2025-04-14 00:00:10.776766 | Job console starting...
2025-04-14 00:00:10.787794 | Updating repositories
2025-04-14 00:00:11.103033 | Preparing job workspace
2025-04-14 00:00:12.792977 | Running Ansible setup...
2025-04-14 00:00:19.653942 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-04-14 00:00:20.667176 |
2025-04-14 00:00:20.667307 | PLAY [Base pre]
2025-04-14 00:00:20.733981 |
2025-04-14 00:00:20.734113 | TASK [Setup log path fact]
2025-04-14 00:00:20.799036 | orchestrator | ok
2025-04-14 00:00:20.840929 |
2025-04-14 00:00:20.841065 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-14 00:00:20.873899 | orchestrator | ok
2025-04-14 00:00:20.912528 |
2025-04-14 00:00:20.912638 | TASK [emit-job-header : Print job information]
2025-04-14 00:00:21.035111 | # Job Information
2025-04-14 00:00:21.035346 | Ansible Version: 2.15.3
2025-04-14 00:00:21.035384 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-04-14 00:00:21.035413 | Pipeline: periodic-midnight
2025-04-14 00:00:21.035434 | Executor: 7d211f194f6a
2025-04-14 00:00:21.035454 | Triggered by: https://github.com/osism/testbed
2025-04-14 00:00:21.035472 | Event ID: 91c6ae04327442f8a2f1ac14b6ef01e3
2025-04-14 00:00:21.046900 |
2025-04-14 00:00:21.047013 | LOOP [emit-job-header : Print node information]
2025-04-14 00:00:21.354359 | orchestrator | ok:
2025-04-14 00:00:21.354500 | orchestrator | # Node Information
2025-04-14 00:00:21.354527 | orchestrator | Inventory Hostname: orchestrator
2025-04-14 00:00:21.354546 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-04-14 00:00:21.354563 | orchestrator | Username: zuul-testbed06
2025-04-14 00:00:21.354580 | orchestrator | Distro: Debian 12.10
2025-04-14 00:00:21.354598 | orchestrator | Provider: static-testbed
2025-04-14 00:00:21.354615 | orchestrator | Label: testbed-orchestrator
2025-04-14 00:00:21.354631 | orchestrator | Product Name: OpenStack Nova
2025-04-14 00:00:21.354647 | orchestrator | Interface IP: 81.163.193.140
2025-04-14 00:00:21.381394 |
2025-04-14 00:00:21.381511 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-04-14 00:00:22.085253 | orchestrator -> localhost | changed
2025-04-14 00:00:22.092655 |
2025-04-14 00:00:22.092743 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-04-14 00:00:23.926532 | orchestrator -> localhost | changed
2025-04-14 00:00:23.945510 |
2025-04-14 00:00:23.945616 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-04-14 00:00:24.397244 | orchestrator -> localhost | ok
2025-04-14 00:00:24.414971 |
2025-04-14 00:00:24.415070 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-04-14 00:00:24.455044 | orchestrator | ok
2025-04-14 00:00:24.471293 | orchestrator | included: /var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-04-14 00:00:24.478539 |
2025-04-14 00:00:24.478623 | TASK [add-build-sshkey : Create Temp SSH key]
2025-04-14 00:00:26.059616 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-04-14 00:00:26.060874 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/8b19518b04be443abf0d643941e8b221_id_rsa 2025-04-14 00:00:26.060949 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/8b19518b04be443abf0d643941e8b221_id_rsa.pub 2025-04-14 00:00:26.060980 | orchestrator -> localhost | The key fingerprint is: 2025-04-14 00:00:26.061005 | orchestrator -> localhost | SHA256:4AV1FEtMZ5zWhdD0sON/FcpRiJPZhtTTBIwUudqWOcI zuul-build-sshkey 2025-04-14 00:00:26.061028 | orchestrator -> localhost | The key's randomart image is: 2025-04-14 00:00:26.061049 | orchestrator -> localhost | +---[RSA 3072]----+ 2025-04-14 00:00:26.061070 | orchestrator -> localhost | | ...+O*/BB=.| 2025-04-14 00:00:26.061090 | orchestrator -> localhost | | . o.%+B== | 2025-04-14 00:00:26.061121 | orchestrator -> localhost | | . . ..+.+..| 2025-04-14 00:00:26.061141 | orchestrator -> localhost | | . o ...o..| 2025-04-14 00:00:26.061160 | orchestrator -> localhost | | . S o oo. .| 2025-04-14 00:00:26.061180 | orchestrator -> localhost | | E * ..| 2025-04-14 00:00:26.061206 | orchestrator -> localhost | | o . o| 2025-04-14 00:00:26.061228 | orchestrator -> localhost | | .| 2025-04-14 00:00:26.061249 | orchestrator -> localhost | | | 2025-04-14 00:00:26.061269 | orchestrator -> localhost | +----[SHA256]-----+ 2025-04-14 00:00:26.061328 | orchestrator -> localhost | ok: Runtime: 0:00:00.801913 2025-04-14 00:00:26.069830 | 2025-04-14 00:00:26.069923 | TASK [add-build-sshkey : Remote setup ssh keys (linux)] 2025-04-14 00:00:26.135429 | orchestrator | ok 2025-04-14 00:00:26.150391 | orchestrator | included: /var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/trusted/project_1/opendev.org/zuul/zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml 2025-04-14 00:00:26.185666 | 2025-04-14 00:00:26.185766 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey] 2025-04-14 00:00:26.224312 | orchestrator | skipping: Conditional result was False 2025-04-14 00:00:26.232687 | 2025-04-14 00:00:26.232777 | TASK [add-build-sshkey : Enable access via build key on all nodes] 2025-04-14 00:00:26.902515 | orchestrator | changed 2025-04-14 00:00:26.916971 | 2025-04-14 00:00:26.917071 | TASK [add-build-sshkey : Make sure user has a .ssh] 2025-04-14 00:00:27.196809 | orchestrator | ok 2025-04-14 00:00:27.207907 | 2025-04-14 00:00:27.208006 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes] 2025-04-14 00:00:27.665437 | orchestrator | ok 2025-04-14 00:00:27.726010 | 2025-04-14 00:00:27.726109 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes] 2025-04-14 00:00:28.145828 | orchestrator | ok 2025-04-14 00:00:28.152136 | 2025-04-14 00:00:28.152232 | TASK [add-build-sshkey : Remote setup ssh keys (windows)] 2025-04-14 00:00:28.222213 | orchestrator | skipping: Conditional result was False 2025-04-14 00:00:28.230577 | 2025-04-14 00:00:28.230672 | TASK [remove-zuul-sshkey : Remove master key from local agent] 2025-04-14 00:00:28.931695 | orchestrator -> localhost | changed 2025-04-14 00:00:28.950604 | 2025-04-14 00:00:28.950706 | TASK [add-build-sshkey : Add back temp key] 2025-04-14 00:00:29.265402 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/8b19518b04be443abf0d643941e8b221_id_rsa (zuul-build-sshkey) 2025-04-14 00:00:29.265585 | orchestrator -> localhost | ok: Runtime: 
0:00:00.007571 2025-04-14 00:00:29.272908 | 2025-04-14 00:00:29.273006 | TASK [add-build-sshkey : Verify we can still SSH to all nodes] 2025-04-14 00:00:29.627656 | orchestrator | ok 2025-04-14 00:00:29.638420 | 2025-04-14 00:00:29.638510 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)] 2025-04-14 00:00:29.667396 | orchestrator | skipping: Conditional result was False 2025-04-14 00:00:29.680831 | 2025-04-14 00:00:29.680937 | TASK [start-zuul-console : Start zuul_console daemon.] 2025-04-14 00:00:30.131835 | orchestrator | ok 2025-04-14 00:00:30.171404 | 2025-04-14 00:00:30.171505 | TASK [validate-host : Define zuul_info_dir fact] 2025-04-14 00:00:30.238339 | orchestrator | ok 2025-04-14 00:00:30.248937 | 2025-04-14 00:00:30.249034 | TASK [validate-host : Ensure Zuul Ansible directory exists] 2025-04-14 00:00:30.532026 | orchestrator -> localhost | ok 2025-04-14 00:00:30.540546 | 2025-04-14 00:00:30.540654 | TASK [validate-host : Collect information about the host] 2025-04-14 00:00:31.783116 | orchestrator | ok 2025-04-14 00:00:31.824309 | 2025-04-14 00:00:31.824428 | TASK [validate-host : Sanitize hostname] 2025-04-14 00:00:31.885476 | orchestrator | ok 2025-04-14 00:00:31.891592 | 2025-04-14 00:00:31.891679 | TASK [validate-host : Write out all ansible variables/facts known for each host] 2025-04-14 00:00:32.660739 | orchestrator -> localhost | changed 2025-04-14 00:00:32.667183 | 2025-04-14 00:00:32.667288 | TASK [validate-host : Collect information about zuul worker] 2025-04-14 00:00:33.177795 | orchestrator | ok 2025-04-14 00:00:33.185091 | 2025-04-14 00:00:33.185185 | TASK [validate-host : Write out all zuul information for each host] 2025-04-14 00:00:34.019042 | orchestrator -> localhost | changed 2025-04-14 00:00:34.038346 | 2025-04-14 00:00:34.038447 | TASK [prepare-workspace-log : Start zuul_console daemon.] 2025-04-14 00:00:34.366420 | orchestrator | ok 2025-04-14 00:00:34.374703 | 2025-04-14 00:00:34.374934 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.] 2025-04-14 00:00:58.336664 | orchestrator | changed: 2025-04-14 00:00:58.336904 | orchestrator | .d..t...... src/ 2025-04-14 00:00:58.336942 | orchestrator | .d..t...... src/github.com/ 2025-04-14 00:00:58.336967 | orchestrator | .d..t...... src/github.com/osism/ 2025-04-14 00:00:58.336988 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/ 2025-04-14 00:00:58.337007 | orchestrator | RedHat.yml 2025-04-14 00:00:58.352041 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml 2025-04-14 00:00:58.352058 | orchestrator | RedHat.yml 2025-04-14 00:00:58.352110 | orchestrator | = 1.53.0"... 2025-04-14 00:01:11.752937 | orchestrator | 00:01:11.752 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"... 2025-04-14 00:01:12.928730 | orchestrator | 00:01:12.928 STDOUT terraform: - Installing hashicorp/null v3.2.3... 2025-04-14 00:01:13.847183 | orchestrator | 00:01:13.846 STDOUT terraform: - Installed hashicorp/null v3.2.3 (signed, key ID 0C0AF313E5FD9F80) 2025-04-14 00:01:15.139497 | orchestrator | 00:01:15.139 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0... 2025-04-14 00:01:16.568709 | orchestrator | 00:01:16.568 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2) 2025-04-14 00:01:17.612832 | orchestrator | 00:01:17.612 STDOUT terraform: - Installing hashicorp/local v2.5.2... 
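Note: the init output above and immediately below shows OpenTofu selecting hashicorp/null v3.2.3, terraform-provider-openstack/openstack v3.0.0 and hashicorp/local v2.5.2, with an explicit ">= 2.2.0" constraint on local and a partially visible ">= 1.53.0" constraint. A minimal sketch of a required_providers block that would produce this kind of init output; attributing the ">= 1.53.0" constraint to the openstack provider is an assumption, not something this log confirms.

```hcl
# Sketch only: provider requirements inferred from the "tofu init" output in this log.
# Versions actually selected in this run: null v3.2.3, openstack v3.0.0, local v2.5.2.
# Mapping ">= 1.53.0" to the openstack provider is an assumption.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

The .terraform.lock.hcl file mentioned just below is what records these selections so that later runs of "tofu init" reuse exactly the same provider versions.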
2025-04-14 00:01:18.581074 | orchestrator | 00:01:18.580 STDOUT terraform: - Installed hashicorp/local v2.5.2 (signed, key ID 0C0AF313E5FD9F80) 2025-04-14 00:01:18.581181 | orchestrator | 00:01:18.580 STDOUT terraform: Providers are signed by their developers. 2025-04-14 00:01:18.581218 | orchestrator | 00:01:18.580 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-04-14 00:01:18.581398 | orchestrator | 00:01:18.581 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-04-14 00:01:18.581444 | orchestrator | 00:01:18.581 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-04-14 00:01:18.581608 | orchestrator | 00:01:18.581 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-04-14 00:01:18.581720 | orchestrator | 00:01:18.581 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-04-14 00:01:18.581783 | orchestrator | 00:01:18.581 STDOUT terraform: you run "tofu init" in the future. 2025-04-14 00:01:18.581859 | orchestrator | 00:01:18.581 STDOUT terraform: OpenTofu has been successfully initialized! 2025-04-14 00:01:18.582100 | orchestrator | 00:01:18.581 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-04-14 00:01:18.582173 | orchestrator | 00:01:18.581 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-04-14 00:01:18.582203 | orchestrator | 00:01:18.582 STDOUT terraform: should now work. 2025-04-14 00:01:18.582394 | orchestrator | 00:01:18.582 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-04-14 00:01:18.582487 | orchestrator | 00:01:18.582 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other 2025-04-14 00:01:18.582608 | orchestrator | 00:01:18.582 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-04-14 00:01:18.847708 | orchestrator | 00:01:18.847 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-14 00:01:19.020216 | orchestrator | 00:01:19.019 STDOUT terraform: Created and switched to workspace "ci"! 2025-04-14 00:01:19.020363 | orchestrator | 00:01:19.020 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-04-14 00:01:19.020620 | orchestrator | 00:01:19.020 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-04-14 00:01:19.020686 | orchestrator | 00:01:19.020 STDOUT terraform: for this configuration. 2025-04-14 00:01:19.255169 | orchestrator | 00:01:19.254 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-14 00:01:19.352254 | orchestrator | 00:01:19.352 STDOUT terraform: ci.auto.tfvars 2025-04-14 00:01:19.548360 | orchestrator | 00:01:19.548 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead. 2025-04-14 00:01:20.488398 | orchestrator | 00:01:20.488 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 
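Note: the plan starting here reads the public network immediately, while the two image lookups are deferred to apply time ("config refers to values not yet known"). A rough sketch of the corresponding data blocks; the variable names are assumptions, only the data source types and most_recent = true are taken from this log.

```hcl
# Sketch only: data sources implied by the plan output; variable names are assumptions.
data "openstack_networking_network_v2" "public" {
  name = var.public_network # resolved immediately ("Read complete" just below)
}

data "openstack_images_image_v2" "image" {
  name        = var.image # depends on a value not known until apply in this run
  most_recent = true
}

data "openstack_images_image_v2" "image_node" {
  name        = var.image_node
  most_recent = true
}
```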
2025-04-14 00:01:21.028218 | orchestrator | 00:01:21.027 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-04-14 00:01:21.265251 | orchestrator | 00:01:21.265 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-04-14 00:01:21.265347 | orchestrator | 00:01:21.265 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-04-14 00:01:21.265362 | orchestrator | 00:01:21.265 STDOUT terraform:  + create 2025-04-14 00:01:21.265390 | orchestrator | 00:01:21.265 STDOUT terraform:  <= read (data resources) 2025-04-14 00:01:21.265402 | orchestrator | 00:01:21.265 STDOUT terraform: OpenTofu will perform the following actions: 2025-04-14 00:01:21.265414 | orchestrator | 00:01:21.265 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-04-14 00:01:21.265426 | orchestrator | 00:01:21.265 STDOUT terraform:  # (config refers to values not yet known) 2025-04-14 00:01:21.265440 | orchestrator | 00:01:21.265 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-04-14 00:01:21.265472 | orchestrator | 00:01:21.265 STDOUT terraform:  + checksum = (known after apply) 2025-04-14 00:01:21.265484 | orchestrator | 00:01:21.265 STDOUT terraform:  + created_at = (known after apply) 2025-04-14 00:01:21.265498 | orchestrator | 00:01:21.265 STDOUT terraform:  + file = (known after apply) 2025-04-14 00:01:21.265527 | orchestrator | 00:01:21.265 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.265541 | orchestrator | 00:01:21.265 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.265554 | orchestrator | 00:01:21.265 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-14 00:01:21.265587 | orchestrator | 00:01:21.265 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-14 00:01:21.265607 | orchestrator | 00:01:21.265 STDOUT terraform:  + most_recent = true 2025-04-14 00:01:21.265633 | orchestrator | 00:01:21.265 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.265665 | orchestrator | 00:01:21.265 STDOUT terraform:  + protected = (known after apply) 2025-04-14 00:01:21.265678 | orchestrator | 00:01:21.265 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.265710 | orchestrator | 00:01:21.265 STDOUT terraform:  + schema = (known after apply) 2025-04-14 00:01:21.265740 | orchestrator | 00:01:21.265 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-14 00:01:21.265772 | orchestrator | 00:01:21.265 STDOUT terraform:  + tags = (known after apply) 2025-04-14 00:01:21.265803 | orchestrator | 00:01:21.265 STDOUT terraform:  + updated_at = (known after apply) 2025-04-14 00:01:21.265827 | orchestrator | 00:01:21.265 STDOUT terraform:  } 2025-04-14 00:01:21.266100 | orchestrator | 00:01:21.266 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-04-14 00:01:21.266124 | orchestrator | 00:01:21.266 STDOUT terraform:  # (config refers to values not yet known) 2025-04-14 00:01:21.266160 | orchestrator | 00:01:21.266 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-04-14 00:01:21.266194 | orchestrator | 00:01:21.266 STDOUT terraform:  + checksum = (known after apply) 2025-04-14 00:01:21.266220 | orchestrator | 00:01:21.266 STDOUT terraform:  + created_at = (known after apply) 2025-04-14 00:01:21.266250 | orchestrator | 00:01:21.266 STDOUT terraform:  + file = (known 
after apply) 2025-04-14 00:01:21.266281 | orchestrator | 00:01:21.266 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.266310 | orchestrator | 00:01:21.266 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.266341 | orchestrator | 00:01:21.266 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-04-14 00:01:21.266370 | orchestrator | 00:01:21.266 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-04-14 00:01:21.266384 | orchestrator | 00:01:21.266 STDOUT terraform:  + most_recent = true 2025-04-14 00:01:21.266417 | orchestrator | 00:01:21.266 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.266449 | orchestrator | 00:01:21.266 STDOUT terraform:  + protected = (known after apply) 2025-04-14 00:01:21.266500 | orchestrator | 00:01:21.266 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.266532 | orchestrator | 00:01:21.266 STDOUT terraform:  + schema = (known after apply) 2025-04-14 00:01:21.266548 | orchestrator | 00:01:21.266 STDOUT terraform:  + size_bytes = (known after apply) 2025-04-14 00:01:21.266561 | orchestrator | 00:01:21.266 STDOUT terraform:  + tags = (known after apply) 2025-04-14 00:01:21.266586 | orchestrator | 00:01:21.266 STDOUT terraform:  + updated_at = (known after apply) 2025-04-14 00:01:21.266600 | orchestrator | 00:01:21.266 STDOUT terraform:  } 2025-04-14 00:01:21.266728 | orchestrator | 00:01:21.266 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-04-14 00:01:21.266746 | orchestrator | 00:01:21.266 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-04-14 00:01:21.266789 | orchestrator | 00:01:21.266 STDOUT terraform:  + content = (known after apply) 2025-04-14 00:01:21.266825 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-14 00:01:21.266850 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-14 00:01:21.266893 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-14 00:01:21.266933 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-14 00:01:21.266964 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-14 00:01:21.267056 | orchestrator | 00:01:21.266 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-14 00:01:21.267072 | orchestrator | 00:01:21.267 STDOUT terraform:  + directory_permission = "0777" 2025-04-14 00:01:21.267085 | orchestrator | 00:01:21.267 STDOUT terraform:  + file_permission = "0644" 2025-04-14 00:01:21.267099 | orchestrator | 00:01:21.267 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-04-14 00:01:21.267141 | orchestrator | 00:01:21.267 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.267155 | orchestrator | 00:01:21.267 STDOUT terraform:  } 2025-04-14 00:01:21.267181 | orchestrator | 00:01:21.267 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-04-14 00:01:21.267206 | orchestrator | 00:01:21.267 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-04-14 00:01:21.267244 | orchestrator | 00:01:21.267 STDOUT terraform:  + content = (known after apply) 2025-04-14 00:01:21.267279 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-14 00:01:21.267314 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-14 00:01:21.267351 | orchestrator | 
00:01:21.267 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-14 00:01:21.267387 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-14 00:01:21.267423 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-14 00:01:21.267459 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-14 00:01:21.267473 | orchestrator | 00:01:21.267 STDOUT terraform:  + directory_permission = "0777" 2025-04-14 00:01:21.267503 | orchestrator | 00:01:21.267 STDOUT terraform:  + file_permission = "0644" 2025-04-14 00:01:21.267536 | orchestrator | 00:01:21.267 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-04-14 00:01:21.267573 | orchestrator | 00:01:21.267 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.267585 | orchestrator | 00:01:21.267 STDOUT terraform:  } 2025-04-14 00:01:21.267694 | orchestrator | 00:01:21.267 STDOUT terraform:  # local_file.inventory will be created 2025-04-14 00:01:21.267708 | orchestrator | 00:01:21.267 STDOUT terraform:  + resource "local_file" "inventory" { 2025-04-14 00:01:21.267748 | orchestrator | 00:01:21.267 STDOUT terraform:  + content = (known after apply) 2025-04-14 00:01:21.267782 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-14 00:01:21.267819 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-14 00:01:21.267855 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-14 00:01:21.267890 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-14 00:01:21.267925 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-14 00:01:21.267960 | orchestrator | 00:01:21.267 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-14 00:01:21.267986 | orchestrator | 00:01:21.267 STDOUT terraform:  + directory_permission = "0777" 2025-04-14 00:01:21.268018 | orchestrator | 00:01:21.267 STDOUT terraform:  + file_permission = "0644" 2025-04-14 00:01:21.268051 | orchestrator | 00:01:21.268 STDOUT terraform:  + filename = "inventory.ci" 2025-04-14 00:01:21.268085 | orchestrator | 00:01:21.268 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.268097 | orchestrator | 00:01:21.268 STDOUT terraform:  } 2025-04-14 00:01:21.268133 | orchestrator | 00:01:21.268 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-04-14 00:01:21.268163 | orchestrator | 00:01:21.268 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-04-14 00:01:21.268198 | orchestrator | 00:01:21.268 STDOUT terraform:  + content = (sensitive value) 2025-04-14 00:01:21.268234 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-04-14 00:01:21.268267 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-04-14 00:01:21.268299 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_md5 = (known after apply) 2025-04-14 00:01:21.268340 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_sha1 = (known after apply) 2025-04-14 00:01:21.268370 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_sha256 = (known after apply) 2025-04-14 00:01:21.268406 | orchestrator | 00:01:21.268 STDOUT terraform:  + content_sha512 = (known after apply) 2025-04-14 00:01:21.268430 | orchestrator | 00:01:21.268 STDOUT 
terraform:  + directory_permission = "0700" 2025-04-14 00:01:21.268455 | orchestrator | 00:01:21.268 STDOUT terraform:  + file_permission = "0600" 2025-04-14 00:01:21.268486 | orchestrator | 00:01:21.268 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-04-14 00:01:21.268522 | orchestrator | 00:01:21.268 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.268535 | orchestrator | 00:01:21.268 STDOUT terraform:  } 2025-04-14 00:01:21.268562 | orchestrator | 00:01:21.268 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-04-14 00:01:21.268591 | orchestrator | 00:01:21.268 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-04-14 00:01:21.268613 | orchestrator | 00:01:21.268 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.268625 | orchestrator | 00:01:21.268 STDOUT terraform:  } 2025-04-14 00:01:21.268676 | orchestrator | 00:01:21.268 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-04-14 00:01:21.268723 | orchestrator | 00:01:21.268 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-04-14 00:01:21.268753 | orchestrator | 00:01:21.268 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.268766 | orchestrator | 00:01:21.268 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.268801 | orchestrator | 00:01:21.268 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.268834 | orchestrator | 00:01:21.268 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.268864 | orchestrator | 00:01:21.268 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.268911 | orchestrator | 00:01:21.268 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-04-14 00:01:21.268943 | orchestrator | 00:01:21.268 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.268956 | orchestrator | 00:01:21.268 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.268980 | orchestrator | 00:01:21.268 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.269007 | orchestrator | 00:01:21.268 STDOUT terraform:  } 2025-04-14 00:01:21.269053 | orchestrator | 00:01:21.268 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-04-14 00:01:21.269100 | orchestrator | 00:01:21.269 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.269130 | orchestrator | 00:01:21.269 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.269143 | orchestrator | 00:01:21.269 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.269179 | orchestrator | 00:01:21.269 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.269214 | orchestrator | 00:01:21.269 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.269242 | orchestrator | 00:01:21.269 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.269282 | orchestrator | 00:01:21.269 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-04-14 00:01:21.269313 | orchestrator | 00:01:21.269 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.269325 | orchestrator | 00:01:21.269 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.269351 | orchestrator | 00:01:21.269 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.269363 | orchestrator | 00:01:21.269 STDOUT terraform:  } 2025-04-14 00:01:21.269408 | orchestrator | 00:01:21.269 STDOUT 
terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-04-14 00:01:21.269453 | orchestrator | 00:01:21.269 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.269483 | orchestrator | 00:01:21.269 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.269505 | orchestrator | 00:01:21.269 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.269536 | orchestrator | 00:01:21.269 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.269567 | orchestrator | 00:01:21.269 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.269598 | orchestrator | 00:01:21.269 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.269637 | orchestrator | 00:01:21.269 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-04-14 00:01:21.269671 | orchestrator | 00:01:21.269 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.269690 | orchestrator | 00:01:21.269 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.269702 | orchestrator | 00:01:21.269 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.269713 | orchestrator | 00:01:21.269 STDOUT terraform:  } 2025-04-14 00:01:21.269767 | orchestrator | 00:01:21.269 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-04-14 00:01:21.269812 | orchestrator | 00:01:21.269 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.269843 | orchestrator | 00:01:21.269 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.269857 | orchestrator | 00:01:21.269 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.269893 | orchestrator | 00:01:21.269 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.269925 | orchestrator | 00:01:21.269 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.269954 | orchestrator | 00:01:21.269 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.270134 | orchestrator | 00:01:21.269 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-04-14 00:01:21.270165 | orchestrator | 00:01:21.269 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.270173 | orchestrator | 00:01:21.270 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.270182 | orchestrator | 00:01:21.270 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.270191 | orchestrator | 00:01:21.270 STDOUT terraform:  } 2025-04-14 00:01:21.270202 | orchestrator | 00:01:21.270 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-04-14 00:01:21.270232 | orchestrator | 00:01:21.270 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.270243 | orchestrator | 00:01:21.270 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.270307 | orchestrator | 00:01:21.270 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.270339 | orchestrator | 00:01:21.270 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.270370 | orchestrator | 00:01:21.270 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.270400 | orchestrator | 00:01:21.270 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.270440 | orchestrator | 00:01:21.270 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-04-14 00:01:21.270470 | orchestrator | 
00:01:21.270 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.270482 | orchestrator | 00:01:21.270 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.270511 | orchestrator | 00:01:21.270 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.270522 | orchestrator | 00:01:21.270 STDOUT terraform:  } 2025-04-14 00:01:21.270574 | orchestrator | 00:01:21.270 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-04-14 00:01:21.270649 | orchestrator | 00:01:21.270 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.270661 | orchestrator | 00:01:21.270 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.270681 | orchestrator | 00:01:21.270 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.270716 | orchestrator | 00:01:21.270 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.270729 | orchestrator | 00:01:21.270 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.270740 | orchestrator | 00:01:21.270 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.270782 | orchestrator | 00:01:21.270 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-04-14 00:01:21.270815 | orchestrator | 00:01:21.270 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.270826 | orchestrator | 00:01:21.270 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.270852 | orchestrator | 00:01:21.270 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.270863 | orchestrator | 00:01:21.270 STDOUT terraform:  } 2025-04-14 00:01:21.270909 | orchestrator | 00:01:21.270 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-04-14 00:01:21.270954 | orchestrator | 00:01:21.270 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-04-14 00:01:21.270984 | orchestrator | 00:01:21.270 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.271014 | orchestrator | 00:01:21.270 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.271043 | orchestrator | 00:01:21.271 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.271075 | orchestrator | 00:01:21.271 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.271106 | orchestrator | 00:01:21.271 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.271145 | orchestrator | 00:01:21.271 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-04-14 00:01:21.271175 | orchestrator | 00:01:21.271 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.271198 | orchestrator | 00:01:21.271 STDOUT terraform:  + size = 80 2025-04-14 00:01:21.271220 | orchestrator | 00:01:21.271 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.271231 | orchestrator | 00:01:21.271 STDOUT terraform:  } 2025-04-14 00:01:21.271274 | orchestrator | 00:01:21.271 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-04-14 00:01:21.271317 | orchestrator | 00:01:21.271 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.271347 | orchestrator | 00:01:21.271 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.271358 | orchestrator | 00:01:21.271 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.271415 | orchestrator | 00:01:21.271 STDOUT terraform:  + id = (known after apply) 
2025-04-14 00:01:21.271429 | orchestrator | 00:01:21.271 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.271475 | orchestrator | 00:01:21.271 STDOUT terraform:  + name = "testbed-volume-0-node-0" 2025-04-14 00:01:21.271489 | orchestrator | 00:01:21.271 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.271500 | orchestrator | 00:01:21.271 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.271517 | orchestrator | 00:01:21.271 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.271528 | orchestrator | 00:01:21.271 STDOUT terraform:  } 2025-04-14 00:01:21.271601 | orchestrator | 00:01:21.271 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-04-14 00:01:21.271616 | orchestrator | 00:01:21.271 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.271651 | orchestrator | 00:01:21.271 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.271663 | orchestrator | 00:01:21.271 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.271699 | orchestrator | 00:01:21.271 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.271730 | orchestrator | 00:01:21.271 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.271766 | orchestrator | 00:01:21.271 STDOUT terraform:  + name = "testbed-volume-1-node-1" 2025-04-14 00:01:21.271798 | orchestrator | 00:01:21.271 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.271819 | orchestrator | 00:01:21.271 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.271841 | orchestrator | 00:01:21.271 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.271851 | orchestrator | 00:01:21.271 STDOUT terraform:  } 2025-04-14 00:01:21.271895 | orchestrator | 00:01:21.271 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-04-14 00:01:21.271939 | orchestrator | 00:01:21.271 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.272027 | orchestrator | 00:01:21.271 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.272038 | orchestrator | 00:01:21.271 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.272050 | orchestrator | 00:01:21.271 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.272094 | orchestrator | 00:01:21.272 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.272106 | orchestrator | 00:01:21.272 STDOUT terraform:  + name = "testbed-volume-2-node-2" 2025-04-14 00:01:21.272117 | orchestrator | 00:01:21.272 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.272144 | orchestrator | 00:01:21.272 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.272156 | orchestrator | 00:01:21.272 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.272166 | orchestrator | 00:01:21.272 STDOUT terraform:  } 2025-04-14 00:01:21.272213 | orchestrator | 00:01:21.272 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-04-14 00:01:21.272264 | orchestrator | 00:01:21.272 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.272277 | orchestrator | 00:01:21.272 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.272306 | orchestrator | 00:01:21.272 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.272333 | orchestrator | 00:01:21.272 STDOUT terraform:  + id = 
(known after apply) 2025-04-14 00:01:21.272361 | orchestrator | 00:01:21.272 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.272397 | orchestrator | 00:01:21.272 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-04-14 00:01:21.272423 | orchestrator | 00:01:21.272 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.272435 | orchestrator | 00:01:21.272 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.272445 | orchestrator | 00:01:21.272 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.272456 | orchestrator | 00:01:21.272 STDOUT terraform:  } 2025-04-14 00:01:21.272564 | orchestrator | 00:01:21.272 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-04-14 00:01:21.272576 | orchestrator | 00:01:21.272 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.272586 | orchestrator | 00:01:21.272 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.272597 | orchestrator | 00:01:21.272 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.272634 | orchestrator | 00:01:21.272 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.272662 | orchestrator | 00:01:21.272 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.272699 | orchestrator | 00:01:21.272 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-04-14 00:01:21.272726 | orchestrator | 00:01:21.272 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.272737 | orchestrator | 00:01:21.272 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.272797 | orchestrator | 00:01:21.272 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.272808 | orchestrator | 00:01:21.272 STDOUT terraform:  } 2025-04-14 00:01:21.272818 | orchestrator | 00:01:21.272 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-04-14 00:01:21.272853 | orchestrator | 00:01:21.272 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.272880 | orchestrator | 00:01:21.272 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.272892 | orchestrator | 00:01:21.272 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.272931 | orchestrator | 00:01:21.272 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.272962 | orchestrator | 00:01:21.272 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.273015 | orchestrator | 00:01:21.272 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-04-14 00:01:21.273057 | orchestrator | 00:01:21.272 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.273084 | orchestrator | 00:01:21.273 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.273095 | orchestrator | 00:01:21.273 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.273106 | orchestrator | 00:01:21.273 STDOUT terraform:  } 2025-04-14 00:01:21.273154 | orchestrator | 00:01:21.273 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-04-14 00:01:21.273198 | orchestrator | 00:01:21.273 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.273225 | orchestrator | 00:01:21.273 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.273242 | orchestrator | 00:01:21.273 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.273270 | orchestrator | 00:01:21.273 STDOUT 
terraform:  + id = (known after apply) 2025-04-14 00:01:21.273297 | orchestrator | 00:01:21.273 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.273336 | orchestrator | 00:01:21.273 STDOUT terraform:  + name = "testbed-volume-6-node-0" 2025-04-14 00:01:21.273364 | orchestrator | 00:01:21.273 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.273375 | orchestrator | 00:01:21.273 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.273412 | orchestrator | 00:01:21.273 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.273454 | orchestrator | 00:01:21.273 STDOUT terraform:  } 2025-04-14 00:01:21.273466 | orchestrator | 00:01:21.273 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-04-14 00:01:21.273501 | orchestrator | 00:01:21.273 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.273554 | orchestrator | 00:01:21.273 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.273565 | orchestrator | 00:01:21.273 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.273575 | orchestrator | 00:01:21.273 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.273609 | orchestrator | 00:01:21.273 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.273637 | orchestrator | 00:01:21.273 STDOUT terraform:  + name = "testbed-volume-7-node-1" 2025-04-14 00:01:21.273665 | orchestrator | 00:01:21.273 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.273676 | orchestrator | 00:01:21.273 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.273687 | orchestrator | 00:01:21.273 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.273697 | orchestrator | 00:01:21.273 STDOUT terraform:  } 2025-04-14 00:01:21.273751 | orchestrator | 00:01:21.273 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-04-14 00:01:21.273795 | orchestrator | 00:01:21.273 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.273822 | orchestrator | 00:01:21.273 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.273839 | orchestrator | 00:01:21.273 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.273867 | orchestrator | 00:01:21.273 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.273894 | orchestrator | 00:01:21.273 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.273932 | orchestrator | 00:01:21.273 STDOUT terraform:  + name = "testbed-volume-8-node-2" 2025-04-14 00:01:21.273959 | orchestrator | 00:01:21.273 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.273971 | orchestrator | 00:01:21.273 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.273981 | orchestrator | 00:01:21.273 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.274004 | orchestrator | 00:01:21.273 STDOUT terraform:  } 2025-04-14 00:01:21.274077 | orchestrator | 00:01:21.273 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[9] will be created 2025-04-14 00:01:21.274119 | orchestrator | 00:01:21.274 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.274146 | orchestrator | 00:01:21.274 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.274158 | orchestrator | 00:01:21.274 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.274196 | orchestrator | 
00:01:21.274 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.274231 | orchestrator | 00:01:21.274 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.274270 | orchestrator | 00:01:21.274 STDOUT terraform:  + name = "testbed-volume-9-node-3" 2025-04-14 00:01:21.274297 | orchestrator | 00:01:21.274 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.274308 | orchestrator | 00:01:21.274 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.274343 | orchestrator | 00:01:21.274 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.274387 | orchestrator | 00:01:21.274 STDOUT terraform:  } 2025-04-14 00:01:21.274399 | orchestrator | 00:01:21.274 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[10] will be created 2025-04-14 00:01:21.274432 | orchestrator | 00:01:21.274 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.274459 | orchestrator | 00:01:21.274 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.274470 | orchestrator | 00:01:21.274 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.274508 | orchestrator | 00:01:21.274 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.274537 | orchestrator | 00:01:21.274 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.274572 | orchestrator | 00:01:21.274 STDOUT terraform:  + name = "testbed-volume-10-node-4" 2025-04-14 00:01:21.274600 | orchestrator | 00:01:21.274 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.274611 | orchestrator | 00:01:21.274 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.274637 | orchestrator | 00:01:21.274 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.274691 | orchestrator | 00:01:21.274 STDOUT terraform:  } 2025-04-14 00:01:21.274702 | orchestrator | 00:01:21.274 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[11] will be created 2025-04-14 00:01:21.274737 | orchestrator | 00:01:21.274 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.274764 | orchestrator | 00:01:21.274 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.274775 | orchestrator | 00:01:21.274 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.274810 | orchestrator | 00:01:21.274 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.274836 | orchestrator | 00:01:21.274 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.274875 | orchestrator | 00:01:21.274 STDOUT terraform:  + name = "testbed-volume-11-node-5" 2025-04-14 00:01:21.274895 | orchestrator | 00:01:21.274 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.274931 | orchestrator | 00:01:21.274 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.274940 | orchestrator | 00:01:21.274 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.274950 | orchestrator | 00:01:21.274 STDOUT terraform:  } 2025-04-14 00:01:21.275071 | orchestrator | 00:01:21.274 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[12] will be created 2025-04-14 00:01:21.275101 | orchestrator | 00:01:21.275 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.275128 | orchestrator | 00:01:21.275 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.275139 | orchestrator | 00:01:21.275 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 
00:01:21.275177 | orchestrator | 00:01:21.275 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.275204 | orchestrator | 00:01:21.275 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.275242 | orchestrator | 00:01:21.275 STDOUT terraform:  + name = "testbed-volume-12-node-0" 2025-04-14 00:01:21.275269 | orchestrator | 00:01:21.275 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.275280 | orchestrator | 00:01:21.275 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.275314 | orchestrator | 00:01:21.275 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.275359 | orchestrator | 00:01:21.275 STDOUT terraform:  } 2025-04-14 00:01:21.275371 | orchestrator | 00:01:21.275 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[13] will be created 2025-04-14 00:01:21.275403 | orchestrator | 00:01:21.275 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.275430 | orchestrator | 00:01:21.275 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.275441 | orchestrator | 00:01:21.275 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.275480 | orchestrator | 00:01:21.275 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.275507 | orchestrator | 00:01:21.275 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.275548 | orchestrator | 00:01:21.275 STDOUT terraform:  + name = "testbed-volume-13-node-1" 2025-04-14 00:01:21.275575 | orchestrator | 00:01:21.275 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.275613 | orchestrator | 00:01:21.275 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.275656 | orchestrator | 00:01:21.275 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.275667 | orchestrator | 00:01:21.275 STDOUT terraform:  } 2025-04-14 00:01:21.275678 | orchestrator | 00:01:21.275 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[14] will be created 2025-04-14 00:01:21.275689 | orchestrator | 00:01:21.275 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.275728 | orchestrator | 00:01:21.275 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.275740 | orchestrator | 00:01:21.275 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.275773 | orchestrator | 00:01:21.275 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.275799 | orchestrator | 00:01:21.275 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.275838 | orchestrator | 00:01:21.275 STDOUT terraform:  + name = "testbed-volume-14-node-2" 2025-04-14 00:01:21.275864 | orchestrator | 00:01:21.275 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.275875 | orchestrator | 00:01:21.275 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.275902 | orchestrator | 00:01:21.275 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.275953 | orchestrator | 00:01:21.275 STDOUT terraform:  } 2025-04-14 00:01:21.275965 | orchestrator | 00:01:21.275 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[15] will be created 2025-04-14 00:01:21.276013 | orchestrator | 00:01:21.275 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.276025 | orchestrator | 00:01:21.275 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.276052 | orchestrator | 00:01:21.276 STDOUT terraform:  + 
availability_zone = "nova" 2025-04-14 00:01:21.276079 | orchestrator | 00:01:21.276 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.276115 | orchestrator | 00:01:21.276 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.276150 | orchestrator | 00:01:21.276 STDOUT terraform:  + name = "testbed-volume-15-node-3" 2025-04-14 00:01:21.276177 | orchestrator | 00:01:21.276 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.276188 | orchestrator | 00:01:21.276 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.276223 | orchestrator | 00:01:21.276 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.276267 | orchestrator | 00:01:21.276 STDOUT terraform:  } 2025-04-14 00:01:21.276279 | orchestrator | 00:01:21.276 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[16] will be created 2025-04-14 00:01:21.276312 | orchestrator | 00:01:21.276 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.276338 | orchestrator | 00:01:21.276 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.276349 | orchestrator | 00:01:21.276 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.276388 | orchestrator | 00:01:21.276 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.276415 | orchestrator | 00:01:21.276 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.276452 | orchestrator | 00:01:21.276 STDOUT terraform:  + name = "testbed-volume-16-node-4" 2025-04-14 00:01:21.276476 | orchestrator | 00:01:21.276 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.276487 | orchestrator | 00:01:21.276 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.276511 | orchestrator | 00:01:21.276 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.276522 | orchestrator | 00:01:21.276 STDOUT terraform:  } 2025-04-14 00:01:21.276573 | orchestrator | 00:01:21.276 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[17] will be created 2025-04-14 00:01:21.276616 | orchestrator | 00:01:21.276 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-04-14 00:01:21.276644 | orchestrator | 00:01:21.276 STDOUT terraform:  + attachment = (known after apply) 2025-04-14 00:01:21.276655 | orchestrator | 00:01:21.276 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.276691 | orchestrator | 00:01:21.276 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.276719 | orchestrator | 00:01:21.276 STDOUT terraform:  + metadata = (known after apply) 2025-04-14 00:01:21.276756 | orchestrator | 00:01:21.276 STDOUT terraform:  + name = "testbed-volume-17-node-5" 2025-04-14 00:01:21.276796 | orchestrator | 00:01:21.276 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.276808 | orchestrator | 00:01:21.276 STDOUT terraform:  + size = 20 2025-04-14 00:01:21.276819 | orchestrator | 00:01:21.276 STDOUT terraform:  + volume_type = "ssd" 2025-04-14 00:01:21.276829 | orchestrator | 00:01:21.276 STDOUT terraform:  } 2025-04-14 00:01:21.276881 | orchestrator | 00:01:21.276 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-04-14 00:01:21.276923 | orchestrator | 00:01:21.276 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-04-14 00:01:21.276959 | orchestrator | 00:01:21.276 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.277029 | orchestrator | 00:01:21.276 
STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.277042 | orchestrator | 00:01:21.276 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.277085 | orchestrator | 00:01:21.277 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.277096 | orchestrator | 00:01:21.277 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.277122 | orchestrator | 00:01:21.277 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.277156 | orchestrator | 00:01:21.277 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.277191 | orchestrator | 00:01:21.277 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.277218 | orchestrator | 00:01:21.277 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-04-14 00:01:21.277229 | orchestrator | 00:01:21.277 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.277276 | orchestrator | 00:01:21.277 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.277311 | orchestrator | 00:01:21.277 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.277345 | orchestrator | 00:01:21.277 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.277357 | orchestrator | 00:01:21.277 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.277402 | orchestrator | 00:01:21.277 STDOUT terraform:  + name = "testbed-manager" 2025-04-14 00:01:21.277442 | orchestrator | 00:01:21.277 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.277485 | orchestrator | 00:01:21.277 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.277522 | orchestrator | 00:01:21.277 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.277539 | orchestrator | 00:01:21.277 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.277577 | orchestrator | 00:01:21.277 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.277615 | orchestrator | 00:01:21.277 STDOUT terraform:  + user_data = (known after apply) 2025-04-14 00:01:21.277649 | orchestrator | 00:01:21.277 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.277661 | orchestrator | 00:01:21.277 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.277671 | orchestrator | 00:01:21.277 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.277706 | orchestrator | 00:01:21.277 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.277757 | orchestrator | 00:01:21.277 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.277794 | orchestrator | 00:01:21.277 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.277807 | orchestrator | 00:01:21.277 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.277843 | orchestrator | 00:01:21.277 STDOUT terraform:  } 2025-04-14 00:01:21.277853 | orchestrator | 00:01:21.277 STDOUT terraform:  + network { 2025-04-14 00:01:21.277864 | orchestrator | 00:01:21.277 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.277898 | orchestrator | 00:01:21.277 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.277910 | orchestrator | 00:01:21.277 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.277920 | orchestrator | 00:01:21.277 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.277959 | orchestrator | 00:01:21.277 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.277986 | orchestrator | 00:01:21.277 STDOUT 
terraform:  + port = (known after apply) 2025-04-14 00:01:21.278046 | orchestrator | 00:01:21.277 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.278057 | orchestrator | 00:01:21.278 STDOUT terraform:  } 2025-04-14 00:01:21.278067 | orchestrator | 00:01:21.278 STDOUT terraform:  } 2025-04-14 00:01:21.278110 | orchestrator | 00:01:21.278 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-04-14 00:01:21.278160 | orchestrator | 00:01:21.278 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.278196 | orchestrator | 00:01:21.278 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.278231 | orchestrator | 00:01:21.278 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.278268 | orchestrator | 00:01:21.278 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.278304 | orchestrator | 00:01:21.278 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.278315 | orchestrator | 00:01:21.278 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.278342 | orchestrator | 00:01:21.278 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.278377 | orchestrator | 00:01:21.278 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.278411 | orchestrator | 00:01:21.278 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.278437 | orchestrator | 00:01:21.278 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.278448 | orchestrator | 00:01:21.278 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.278495 | orchestrator | 00:01:21.278 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.278531 | orchestrator | 00:01:21.278 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.278566 | orchestrator | 00:01:21.278 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.278577 | orchestrator | 00:01:21.278 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.278618 | orchestrator | 00:01:21.278 STDOUT terraform:  + name = "testbed-node-0" 2025-04-14 00:01:21.278629 | orchestrator | 00:01:21.278 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.278675 | orchestrator | 00:01:21.278 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.278710 | orchestrator | 00:01:21.278 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.278721 | orchestrator | 00:01:21.278 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.278763 | orchestrator | 00:01:21.278 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.278814 | orchestrator | 00:01:21.278 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.278847 | orchestrator | 00:01:21.278 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.278859 | orchestrator | 00:01:21.278 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.278873 | orchestrator | 00:01:21.278 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.278900 | orchestrator | 00:01:21.278 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.278926 | orchestrator | 00:01:21.278 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.278953 | orchestrator | 00:01:21.278 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.279006 | orchestrator | 00:01:21.278 STDOUT terraform:  + uuid = (known after apply) 
2025-04-14 00:01:21.279016 | orchestrator | 00:01:21.278 STDOUT terraform:  } 2025-04-14 00:01:21.279026 | orchestrator | 00:01:21.279 STDOUT terraform:  + network { 2025-04-14 00:01:21.279037 | orchestrator | 00:01:21.279 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.279075 | orchestrator | 00:01:21.279 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.279103 | orchestrator | 00:01:21.279 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.279129 | orchestrator | 00:01:21.279 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.279162 | orchestrator | 00:01:21.279 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.279192 | orchestrator | 00:01:21.279 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.279220 | orchestrator | 00:01:21.279 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.279234 | orchestrator | 00:01:21.279 STDOUT terraform:  } 2025-04-14 00:01:21.279245 | orchestrator | 00:01:21.279 STDOUT terraform:  } 2025-04-14 00:01:21.279283 | orchestrator | 00:01:21.279 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-04-14 00:01:21.279325 | orchestrator | 00:01:21.279 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.279361 | orchestrator | 00:01:21.279 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.279395 | orchestrator | 00:01:21.279 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.279430 | orchestrator | 00:01:21.279 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.279468 | orchestrator | 00:01:21.279 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.279479 | orchestrator | 00:01:21.279 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.279506 | orchestrator | 00:01:21.279 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.279541 | orchestrator | 00:01:21.279 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.279576 | orchestrator | 00:01:21.279 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.279606 | orchestrator | 00:01:21.279 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.279617 | orchestrator | 00:01:21.279 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.279658 | orchestrator | 00:01:21.279 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.279692 | orchestrator | 00:01:21.279 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.279729 | orchestrator | 00:01:21.279 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.279741 | orchestrator | 00:01:21.279 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.279781 | orchestrator | 00:01:21.279 STDOUT terraform:  + name = "testbed-node-1" 2025-04-14 00:01:21.279792 | orchestrator | 00:01:21.279 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.279838 | orchestrator | 00:01:21.279 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.279873 | orchestrator | 00:01:21.279 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.279884 | orchestrator | 00:01:21.279 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.279927 | orchestrator | 00:01:21.279 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.279976 | orchestrator | 00:01:21.279 STDOUT terraform:  + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.280023 | orchestrator | 00:01:21.279 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.280036 | orchestrator | 00:01:21.279 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.280046 | orchestrator | 00:01:21.280 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.280080 | orchestrator | 00:01:21.280 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.280114 | orchestrator | 00:01:21.280 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.280130 | orchestrator | 00:01:21.280 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.280174 | orchestrator | 00:01:21.280 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.280183 | orchestrator | 00:01:21.280 STDOUT terraform:  } 2025-04-14 00:01:21.280194 | orchestrator | 00:01:21.280 STDOUT terraform:  + network { 2025-04-14 00:01:21.280204 | orchestrator | 00:01:21.280 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.280243 | orchestrator | 00:01:21.280 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.280270 | orchestrator | 00:01:21.280 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.280297 | orchestrator | 00:01:21.280 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.280324 | orchestrator | 00:01:21.280 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.280351 | orchestrator | 00:01:21.280 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.280385 | orchestrator | 00:01:21.280 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.280394 | orchestrator | 00:01:21.280 STDOUT terraform:  } 2025-04-14 00:01:21.280405 | orchestrator | 00:01:21.280 STDOUT terraform:  } 2025-04-14 00:01:21.280446 | orchestrator | 00:01:21.280 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-04-14 00:01:21.280487 | orchestrator | 00:01:21.280 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.280524 | orchestrator | 00:01:21.280 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.280558 | orchestrator | 00:01:21.280 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.280597 | orchestrator | 00:01:21.280 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.280626 | orchestrator | 00:01:21.280 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.280637 | orchestrator | 00:01:21.280 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.280666 | orchestrator | 00:01:21.280 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.280765 | orchestrator | 00:01:21.280 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.280793 | orchestrator | 00:01:21.280 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.280805 | orchestrator | 00:01:21.280 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.280847 | orchestrator | 00:01:21.280 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.280858 | orchestrator | 00:01:21.280 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.280884 | orchestrator | 00:01:21.280 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.280919 | orchestrator | 00:01:21.280 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.280930 | orchestrator | 00:01:21.280 
STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.280973 | orchestrator | 00:01:21.280 STDOUT terraform:  + name = "testbed-node-2" 2025-04-14 00:01:21.280990 | orchestrator | 00:01:21.280 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.281222 | orchestrator | 00:01:21.280 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.281325 | orchestrator | 00:01:21.281 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.281346 | orchestrator | 00:01:21.281 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.281362 | orchestrator | 00:01:21.281 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.281377 | orchestrator | 00:01:21.281 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.281398 | orchestrator | 00:01:21.281 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.281413 | orchestrator | 00:01:21.281 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.281428 | orchestrator | 00:01:21.281 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.281443 | orchestrator | 00:01:21.281 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.281457 | orchestrator | 00:01:21.281 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.281471 | orchestrator | 00:01:21.281 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.281485 | orchestrator | 00:01:21.281 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.281500 | orchestrator | 00:01:21.281 STDOUT terraform:  } 2025-04-14 00:01:21.281515 | orchestrator | 00:01:21.281 STDOUT terraform:  + network { 2025-04-14 00:01:21.281529 | orchestrator | 00:01:21.281 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.281548 | orchestrator | 00:01:21.281 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.281563 | orchestrator | 00:01:21.281 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.281577 | orchestrator | 00:01:21.281 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.281591 | orchestrator | 00:01:21.281 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.281606 | orchestrator | 00:01:21.281 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.281620 | orchestrator | 00:01:21.281 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.281637 | orchestrator | 00:01:21.281 STDOUT terraform:  } 2025-04-14 00:01:21.281652 | orchestrator | 00:01:21.281 STDOUT terraform:  } 2025-04-14 00:01:21.281667 | orchestrator | 00:01:21.281 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-04-14 00:01:21.281685 | orchestrator | 00:01:21.281 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.281759 | orchestrator | 00:01:21.281 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.281780 | orchestrator | 00:01:21.281 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.281796 | orchestrator | 00:01:21.281 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.281811 | orchestrator | 00:01:21.281 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.281865 | orchestrator | 00:01:21.281 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.281882 | orchestrator | 00:01:21.281 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.281896 | orchestrator | 
00:01:21.281 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.281915 | orchestrator | 00:01:21.281 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.281943 | orchestrator | 00:01:21.281 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.281962 | orchestrator | 00:01:21.281 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.282095 | orchestrator | 00:01:21.281 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.282130 | orchestrator | 00:01:21.281 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.282146 | orchestrator | 00:01:21.281 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.282161 | orchestrator | 00:01:21.282 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.282176 | orchestrator | 00:01:21.282 STDOUT terraform:  + name = "testbed-node-3" 2025-04-14 00:01:21.282191 | orchestrator | 00:01:21.282 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.282209 | orchestrator | 00:01:21.282 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.282224 | orchestrator | 00:01:21.282 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.282238 | orchestrator | 00:01:21.282 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.282256 | orchestrator | 00:01:21.282 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.282274 | orchestrator | 00:01:21.282 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.282293 | orchestrator | 00:01:21.282 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.282311 | orchestrator | 00:01:21.282 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.282329 | orchestrator | 00:01:21.282 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.282363 | orchestrator | 00:01:21.282 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.282381 | orchestrator | 00:01:21.282 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.282429 | orchestrator | 00:01:21.282 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.282581 | orchestrator | 00:01:21.282 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.282617 | orchestrator | 00:01:21.282 STDOUT terraform:  } 2025-04-14 00:01:21.282629 | orchestrator | 00:01:21.282 STDOUT terraform:  + network { 2025-04-14 00:01:21.282639 | orchestrator | 00:01:21.282 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.282648 | orchestrator | 00:01:21.282 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.282659 | orchestrator | 00:01:21.282 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.282668 | orchestrator | 00:01:21.282 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.282685 | orchestrator | 00:01:21.282 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.282696 | orchestrator | 00:01:21.282 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.282704 | orchestrator | 00:01:21.282 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.282714 | orchestrator | 00:01:21.282 STDOUT terraform:  } 2025-04-14 00:01:21.282765 | orchestrator | 00:01:21.282 STDOUT terraform:  } 2025-04-14 00:01:21.282777 | orchestrator | 00:01:21.282 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-04-14 00:01:21.282805 | orchestrator | 00:01:21.282 STDOUT 
terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.282825 | orchestrator | 00:01:21.282 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.282901 | orchestrator | 00:01:21.282 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.282928 | orchestrator | 00:01:21.282 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.282939 | orchestrator | 00:01:21.282 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.282971 | orchestrator | 00:01:21.282 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.282983 | orchestrator | 00:01:21.282 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.283004 | orchestrator | 00:01:21.282 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.283041 | orchestrator | 00:01:21.282 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.283070 | orchestrator | 00:01:21.283 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.283081 | orchestrator | 00:01:21.283 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.283124 | orchestrator | 00:01:21.283 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.283159 | orchestrator | 00:01:21.283 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.283193 | orchestrator | 00:01:21.283 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.283204 | orchestrator | 00:01:21.283 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.283244 | orchestrator | 00:01:21.283 STDOUT terraform:  + name = "testbed-node-4" 2025-04-14 00:01:21.283256 | orchestrator | 00:01:21.283 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.283303 | orchestrator | 00:01:21.283 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.283336 | orchestrator | 00:01:21.283 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.283348 | orchestrator | 00:01:21.283 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.283390 | orchestrator | 00:01:21.283 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.283440 | orchestrator | 00:01:21.283 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.283474 | orchestrator | 00:01:21.283 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.283486 | orchestrator | 00:01:21.283 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.283503 | orchestrator | 00:01:21.283 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.283538 | orchestrator | 00:01:21.283 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.283550 | orchestrator | 00:01:21.283 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.283586 | orchestrator | 00:01:21.283 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.283622 | orchestrator | 00:01:21.283 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.283631 | orchestrator | 00:01:21.283 STDOUT terraform:  } 2025-04-14 00:01:21.283642 | orchestrator | 00:01:21.283 STDOUT terraform:  + network { 2025-04-14 00:01:21.283652 | orchestrator | 00:01:21.283 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.283695 | orchestrator | 00:01:21.283 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.283724 | orchestrator | 00:01:21.283 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 
00:01:21.283759 | orchestrator | 00:01:21.283 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.283772 | orchestrator | 00:01:21.283 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.283814 | orchestrator | 00:01:21.283 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.286240 | orchestrator | 00:01:21.283 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.286331 | orchestrator | 00:01:21.283 STDOUT terraform:  } 2025-04-14 00:01:21.286351 | orchestrator | 00:01:21.283 STDOUT terraform:  } 2025-04-14 00:01:21.286367 | orchestrator | 00:01:21.283 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-04-14 00:01:21.286382 | orchestrator | 00:01:21.283 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-04-14 00:01:21.286396 | orchestrator | 00:01:21.283 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-04-14 00:01:21.286411 | orchestrator | 00:01:21.283 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-04-14 00:01:21.286427 | orchestrator | 00:01:21.283 STDOUT terraform:  + all_metadata = (known after apply) 2025-04-14 00:01:21.286442 | orchestrator | 00:01:21.284 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.286457 | orchestrator | 00:01:21.284 STDOUT terraform:  + availability_zone = "nova" 2025-04-14 00:01:21.286471 | orchestrator | 00:01:21.284 STDOUT terraform:  + config_drive = true 2025-04-14 00:01:21.286485 | orchestrator | 00:01:21.284 STDOUT terraform:  + created = (known after apply) 2025-04-14 00:01:21.286500 | orchestrator | 00:01:21.284 STDOUT terraform:  + flavor_id = (known after apply) 2025-04-14 00:01:21.286522 | orchestrator | 00:01:21.284 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-04-14 00:01:21.286538 | orchestrator | 00:01:21.284 STDOUT terraform:  + force_delete = false 2025-04-14 00:01:21.286552 | orchestrator | 00:01:21.284 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.286566 | orchestrator | 00:01:21.284 STDOUT terraform:  + image_id = (known after apply) 2025-04-14 00:01:21.286598 | orchestrator | 00:01:21.284 STDOUT terraform:  + image_name = (known after apply) 2025-04-14 00:01:21.286612 | orchestrator | 00:01:21.284 STDOUT terraform:  + key_pair = "testbed" 2025-04-14 00:01:21.286626 | orchestrator | 00:01:21.284 STDOUT terraform:  + name = "testbed-node-5" 2025-04-14 00:01:21.286641 | orchestrator | 00:01:21.284 STDOUT terraform:  + power_state = "active" 2025-04-14 00:01:21.286655 | orchestrator | 00:01:21.284 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.286669 | orchestrator | 00:01:21.284 STDOUT terraform:  + security_groups = (known after apply) 2025-04-14 00:01:21.286684 | orchestrator | 00:01:21.284 STDOUT terraform:  + stop_before_destroy = false 2025-04-14 00:01:21.286698 | orchestrator | 00:01:21.284 STDOUT terraform:  + updated = (known after apply) 2025-04-14 00:01:21.286712 | orchestrator | 00:01:21.284 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-04-14 00:01:21.286727 | orchestrator | 00:01:21.284 STDOUT terraform:  + block_device { 2025-04-14 00:01:21.286742 | orchestrator | 00:01:21.284 STDOUT terraform:  + boot_index = 0 2025-04-14 00:01:21.286756 | orchestrator | 00:01:21.284 STDOUT terraform:  + delete_on_termination = false 2025-04-14 00:01:21.286770 | orchestrator | 00:01:21.284 STDOUT terraform:  + destination_type = "volume" 2025-04-14 00:01:21.286785 | orchestrator | 
00:01:21.284 STDOUT terraform:  + multiattach = false 2025-04-14 00:01:21.286799 | orchestrator | 00:01:21.284 STDOUT terraform:  + source_type = "volume" 2025-04-14 00:01:21.286813 | orchestrator | 00:01:21.284 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.286827 | orchestrator | 00:01:21.284 STDOUT terraform:  } 2025-04-14 00:01:21.286842 | orchestrator | 00:01:21.284 STDOUT terraform:  + network { 2025-04-14 00:01:21.286856 | orchestrator | 00:01:21.284 STDOUT terraform:  + access_network = false 2025-04-14 00:01:21.286871 | orchestrator | 00:01:21.284 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-04-14 00:01:21.286885 | orchestrator | 00:01:21.284 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-04-14 00:01:21.286909 | orchestrator | 00:01:21.284 STDOUT terraform:  + mac = (known after apply) 2025-04-14 00:01:21.286926 | orchestrator | 00:01:21.284 STDOUT terraform:  + name = (known after apply) 2025-04-14 00:01:21.286940 | orchestrator | 00:01:21.284 STDOUT terraform:  + port = (known after apply) 2025-04-14 00:01:21.286954 | orchestrator | 00:01:21.284 STDOUT terraform:  + uuid = (known after apply) 2025-04-14 00:01:21.286969 | orchestrator | 00:01:21.284 STDOUT terraform:  } 2025-04-14 00:01:21.286983 | orchestrator | 00:01:21.284 STDOUT terraform:  } 2025-04-14 00:01:21.287072 | orchestrator | 00:01:21.284 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-04-14 00:01:21.287091 | orchestrator | 00:01:21.284 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-04-14 00:01:21.287106 | orchestrator | 00:01:21.284 STDOUT terraform:  + fingerprint = (known after apply) 2025-04-14 00:01:21.287121 | orchestrator | 00:01:21.284 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287144 | orchestrator | 00:01:21.285 STDOUT terraform:  + name = "testbed" 2025-04-14 00:01:21.287160 | orchestrator | 00:01:21.285 STDOUT terraform:  + private_key = (sensitive value) 2025-04-14 00:01:21.287174 | orchestrator | 00:01:21.285 STDOUT terraform:  + public_key = (known after apply) 2025-04-14 00:01:21.287189 | orchestrator | 00:01:21.285 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287204 | orchestrator | 00:01:21.285 STDOUT terraform:  + user_id = (known after apply) 2025-04-14 00:01:21.287219 | orchestrator | 00:01:21.285 STDOUT terraform:  } 2025-04-14 00:01:21.287234 | orchestrator | 00:01:21.285 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-04-14 00:01:21.287250 | orchestrator | 00:01:21.285 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287265 | orchestrator | 00:01:21.285 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287280 | orchestrator | 00:01:21.285 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287295 | orchestrator | 00:01:21.285 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287310 | orchestrator | 00:01:21.285 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287325 | orchestrator | 00:01:21.285 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287341 | orchestrator | 00:01:21.285 STDOUT terraform:  } 2025-04-14 00:01:21.287357 | orchestrator | 00:01:21.285 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-04-14 00:01:21.287372 | orchestrator | 00:01:21.285 STDOUT 
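The manager_server and node_server entries boot from a volume (source_type and destination_type "volume", boot_index 0) and attach to a pre-created management port; the keypair they reference carries no public_key, so Nova generates the pair and returns the private key as a sensitive attribute. A minimal sketch of the node side, where node_volume_boot and node_port_management are hypothetical references standing in for resources only partially visible in the plan output above:

resource "openstack_compute_keypair_v2" "key" {
  # no public_key argument: the pair is generated server-side, matching the
  # "(known after apply)" / "(sensitive value)" attributes in the plan
  name = "testbed"
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")  # hypothetical file; the plan prints only a hash of this payload

  block_device {
    # boot from an existing volume instead of an image
    uuid                  = openstack_blockstorage_volume_v3.node_volume_boot[count.index].id  # hypothetical boot volume
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}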
terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287388 | orchestrator | 00:01:21.285 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287403 | orchestrator | 00:01:21.285 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287418 | orchestrator | 00:01:21.285 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287433 | orchestrator | 00:01:21.285 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287453 | orchestrator | 00:01:21.285 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287468 | orchestrator | 00:01:21.285 STDOUT terraform:  } 2025-04-14 00:01:21.287483 | orchestrator | 00:01:21.285 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-04-14 00:01:21.287498 | orchestrator | 00:01:21.285 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287514 | orchestrator | 00:01:21.285 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287529 | orchestrator | 00:01:21.285 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287544 | orchestrator | 00:01:21.285 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287560 | orchestrator | 00:01:21.285 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287587 | orchestrator | 00:01:21.285 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287610 | orchestrator | 00:01:21.285 STDOUT terraform:  } 2025-04-14 00:01:21.287625 | orchestrator | 00:01:21.285 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-04-14 00:01:21.287640 | orchestrator | 00:01:21.285 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287656 | orchestrator | 00:01:21.285 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287670 | orchestrator | 00:01:21.285 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287685 | orchestrator | 00:01:21.285 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287700 | orchestrator | 00:01:21.285 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287715 | orchestrator | 00:01:21.286 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287729 | orchestrator | 00:01:21.286 STDOUT terraform:  } 2025-04-14 00:01:21.287744 | orchestrator | 00:01:21.286 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-04-14 00:01:21.287759 | orchestrator | 00:01:21.286 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287774 | orchestrator | 00:01:21.286 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287789 | orchestrator | 00:01:21.286 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287803 | orchestrator | 00:01:21.286 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287818 | orchestrator | 00:01:21.286 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287832 | orchestrator | 00:01:21.286 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287847 | orchestrator | 00:01:21.286 STDOUT terraform:  } 2025-04-14 00:01:21.287862 | orchestrator | 00:01:21.286 STDOUT terraform:  # 
openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-04-14 00:01:21.287877 | orchestrator | 00:01:21.286 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.287892 | orchestrator | 00:01:21.286 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.287907 | orchestrator | 00:01:21.286 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.287921 | orchestrator | 00:01:21.286 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.287936 | orchestrator | 00:01:21.286 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.287950 | orchestrator | 00:01:21.286 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.287965 | orchestrator | 00:01:21.286 STDOUT terraform:  } 2025-04-14 00:01:21.287980 | orchestrator | 00:01:21.286 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-04-14 00:01:21.288030 | orchestrator | 00:01:21.286 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.288046 | orchestrator | 00:01:21.286 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.288061 | orchestrator | 00:01:21.286 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.288076 | orchestrator | 00:01:21.286 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.288099 | orchestrator | 00:01:21.286 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.288114 | orchestrator | 00:01:21.286 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.288129 | orchestrator | 00:01:21.286 STDOUT terraform:  } 2025-04-14 00:01:21.288144 | orchestrator | 00:01:21.286 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-04-14 00:01:21.288159 | orchestrator | 00:01:21.286 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.288174 | orchestrator | 00:01:21.286 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.288196 | orchestrator | 00:01:21.286 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.288665 | orchestrator | 00:01:21.286 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.288734 | orchestrator | 00:01:21.286 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.288751 | orchestrator | 00:01:21.286 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.288766 | orchestrator | 00:01:21.286 STDOUT terraform:  } 2025-04-14 00:01:21.288781 | orchestrator | 00:01:21.286 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-04-14 00:01:21.288796 | orchestrator | 00:01:21.286 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.288810 | orchestrator | 00:01:21.287 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.288825 | orchestrator | 00:01:21.287 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.288838 | orchestrator | 00:01:21.287 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.288851 | orchestrator | 00:01:21.287 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.288863 | orchestrator | 00:01:21.287 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 
00:01:21.288876 | orchestrator | 00:01:21.287 STDOUT terraform:  } 2025-04-14 00:01:21.288889 | orchestrator | 00:01:21.287 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[9] will be created 2025-04-14 00:01:21.288902 | orchestrator | 00:01:21.287 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.288915 | orchestrator | 00:01:21.287 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.288928 | orchestrator | 00:01:21.287 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.288941 | orchestrator | 00:01:21.287 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.288953 | orchestrator | 00:01:21.287 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.288966 | orchestrator | 00:01:21.287 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.288979 | orchestrator | 00:01:21.287 STDOUT terraform:  } 2025-04-14 00:01:21.289018 | orchestrator | 00:01:21.287 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[10] will be created 2025-04-14 00:01:21.289032 | orchestrator | 00:01:21.287 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289055 | orchestrator | 00:01:21.287 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289068 | orchestrator | 00:01:21.287 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289081 | orchestrator | 00:01:21.287 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289094 | orchestrator | 00:01:21.287 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289106 | orchestrator | 00:01:21.287 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289119 | orchestrator | 00:01:21.287 STDOUT terraform:  } 2025-04-14 00:01:21.289132 | orchestrator | 00:01:21.287 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[11] will be created 2025-04-14 00:01:21.289145 | orchestrator | 00:01:21.287 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289157 | orchestrator | 00:01:21.287 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289170 | orchestrator | 00:01:21.287 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289190 | orchestrator | 00:01:21.287 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289203 | orchestrator | 00:01:21.288 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289216 | orchestrator | 00:01:21.288 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289228 | orchestrator | 00:01:21.288 STDOUT terraform:  } 2025-04-14 00:01:21.289241 | orchestrator | 00:01:21.288 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[12] will be created 2025-04-14 00:01:21.289254 | orchestrator | 00:01:21.288 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289266 | orchestrator | 00:01:21.288 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289279 | orchestrator | 00:01:21.288 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289292 | orchestrator | 00:01:21.288 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289305 | orchestrator | 00:01:21.288 STDOUT terraform:  + region = 
(known after apply) 2025-04-14 00:01:21.289318 | orchestrator | 00:01:21.288 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289331 | orchestrator | 00:01:21.288 STDOUT terraform:  } 2025-04-14 00:01:21.289343 | orchestrator | 00:01:21.288 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[13] will be created 2025-04-14 00:01:21.289356 | orchestrator | 00:01:21.288 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289368 | orchestrator | 00:01:21.288 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289391 | orchestrator | 00:01:21.288 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289404 | orchestrator | 00:01:21.289 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289416 | orchestrator | 00:01:21.289 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289429 | orchestrator | 00:01:21.289 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289448 | orchestrator | 00:01:21.289 STDOUT terraform:  } 2025-04-14 00:01:21.289461 | orchestrator | 00:01:21.289 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[14] will be created 2025-04-14 00:01:21.289477 | orchestrator | 00:01:21.289 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289491 | orchestrator | 00:01:21.289 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289503 | orchestrator | 00:01:21.289 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289516 | orchestrator | 00:01:21.289 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289528 | orchestrator | 00:01:21.289 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289541 | orchestrator | 00:01:21.289 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289558 | orchestrator | 00:01:21.289 STDOUT terraform:  } 2025-04-14 00:01:21.289571 | orchestrator | 00:01:21.289 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[15] will be created 2025-04-14 00:01:21.289584 | orchestrator | 00:01:21.289 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289597 | orchestrator | 00:01:21.289 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289613 | orchestrator | 00:01:21.289 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289625 | orchestrator | 00:01:21.289 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289638 | orchestrator | 00:01:21.289 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289650 | orchestrator | 00:01:21.289 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289663 | orchestrator | 00:01:21.289 STDOUT terraform:  } 2025-04-14 00:01:21.289679 | orchestrator | 00:01:21.289 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[16] will be created 2025-04-14 00:01:21.289692 | orchestrator | 00:01:21.289 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289708 | orchestrator | 00:01:21.289 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289724 | orchestrator | 00:01:21.289 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289740 | orchestrator | 00:01:21.289 
STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.289799 | orchestrator | 00:01:21.289 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.289814 | orchestrator | 00:01:21.289 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.289830 | orchestrator | 00:01:21.289 STDOUT terraform:  } 2025-04-14 00:01:21.289861 | orchestrator | 00:01:21.289 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[17] will be created 2025-04-14 00:01:21.289911 | orchestrator | 00:01:21.289 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-04-14 00:01:21.289929 | orchestrator | 00:01:21.289 STDOUT terraform:  + device = (known after apply) 2025-04-14 00:01:21.289965 | orchestrator | 00:01:21.289 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.289989 | orchestrator | 00:01:21.289 STDOUT terraform:  + instance_id = (known after apply) 2025-04-14 00:01:21.290057 | orchestrator | 00:01:21.289 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.290074 | orchestrator | 00:01:21.290 STDOUT terraform:  + volume_id = (known after apply) 2025-04-14 00:01:21.290090 | orchestrator | 00:01:21.290 STDOUT terraform:  } 2025-04-14 00:01:21.290166 | orchestrator | 00:01:21.290 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-04-14 00:01:21.290207 | orchestrator | 00:01:21.290 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-04-14 00:01:21.290224 | orchestrator | 00:01:21.290 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-14 00:01:21.290261 | orchestrator | 00:01:21.290 STDOUT terraform:  + floating_ip = (known after apply) 2025-04-14 00:01:21.290278 | orchestrator | 00:01:21.290 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.290315 | orchestrator | 00:01:21.290 STDOUT terraform:  + port_id = (known after apply) 2025-04-14 00:01:21.290337 | orchestrator | 00:01:21.290 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.290353 | orchestrator | 00:01:21.290 STDOUT terraform:  } 2025-04-14 00:01:21.290399 | orchestrator | 00:01:21.290 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-04-14 00:01:21.290448 | orchestrator | 00:01:21.290 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-04-14 00:01:21.290465 | orchestrator | 00:01:21.290 STDOUT terraform:  + address = (known after apply) 2025-04-14 00:01:21.290481 | orchestrator | 00:01:21.290 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.290514 | orchestrator | 00:01:21.290 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-14 00:01:21.290532 | orchestrator | 00:01:21.290 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.290568 | orchestrator | 00:01:21.290 STDOUT terraform:  + fixed_ip = (known after apply) 2025-04-14 00:01:21.290585 | orchestrator | 00:01:21.290 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.290600 | orchestrator | 00:01:21.290 STDOUT terraform:  + pool = "public" 2025-04-14 00:01:21.290617 | orchestrator | 00:01:21.290 STDOUT terraform:  + port_id = (known after apply) 2025-04-14 00:01:21.290633 | orchestrator | 00:01:21.290 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.290668 | orchestrator | 00:01:21.290 STDOUT 
terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.290685 | orchestrator | 00:01:21.290 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.290739 | orchestrator | 00:01:21.290 STDOUT terraform:  } 2025-04-14 00:01:21.290756 | orchestrator | 00:01:21.290 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-04-14 00:01:21.290772 | orchestrator | 00:01:21.290 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-04-14 00:01:21.290818 | orchestrator | 00:01:21.290 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.290855 | orchestrator | 00:01:21.290 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.290880 | orchestrator | 00:01:21.290 STDOUT terraform:  + availability_zone_hints = [ 2025-04-14 00:01:21.290893 | orchestrator | 00:01:21.290 STDOUT terraform:  + "nova", 2025-04-14 00:01:21.290909 | orchestrator | 00:01:21.290 STDOUT terraform:  ] 2025-04-14 00:01:21.290924 | orchestrator | 00:01:21.290 STDOUT terraform:  + dns_domain = (known after apply) 2025-04-14 00:01:21.290963 | orchestrator | 00:01:21.290 STDOUT terraform:  + external = (known after apply) 2025-04-14 00:01:21.291051 | orchestrator | 00:01:21.290 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.291068 | orchestrator | 00:01:21.290 STDOUT terraform:  + mtu = (known after apply) 2025-04-14 00:01:21.291084 | orchestrator | 00:01:21.291 STDOUT terraform:  + name = "net-testbed-management" 2025-04-14 00:01:21.291121 | orchestrator | 00:01:21.291 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.291139 | orchestrator | 00:01:21.291 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.291184 | orchestrator | 00:01:21.291 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.291220 | orchestrator | 00:01:21.291 STDOUT terraform:  + shared = (known after apply) 2025-04-14 00:01:21.291255 | orchestrator | 00:01:21.291 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.291292 | orchestrator | 00:01:21.291 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-04-14 00:01:21.291306 | orchestrator | 00:01:21.291 STDOUT terraform:  + segments (known after apply) 2025-04-14 00:01:21.291319 | orchestrator | 00:01:21.291 STDOUT terraform:  } 2025-04-14 00:01:21.291369 | orchestrator | 00:01:21.291 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-04-14 00:01:21.291416 | orchestrator | 00:01:21.291 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-04-14 00:01:21.291452 | orchestrator | 00:01:21.291 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.291488 | orchestrator | 00:01:21.291 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.291523 | orchestrator | 00:01:21.291 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.291561 | orchestrator | 00:01:21.291 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.291597 | orchestrator | 00:01:21.291 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.291632 | orchestrator | 00:01:21.291 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.291668 | orchestrator | 00:01:21.291 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.291705 | orchestrator | 
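The 18 node_volume_attachment entries and the manager floating IP pair shown above could come from definitions along these lines; the modulo mapping of attachment index to node is again an assumption inferred from the volume names earlier in the plan:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  # attach extra volume N to node N % 6 (assumed mapping)
  count       = 18
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  # allocate a floating IP from the "public" pool named in the plan
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}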
00:01:21.291 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.291743 | orchestrator | 00:01:21.291 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.291779 | orchestrator | 00:01:21.291 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.291822 | orchestrator | 00:01:21.291 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.291842 | orchestrator | 00:01:21.291 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.291885 | orchestrator | 00:01:21.291 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.291921 | orchestrator | 00:01:21.291 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.291959 | orchestrator | 00:01:21.291 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.292007 | orchestrator | 00:01:21.291 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.292042 | orchestrator | 00:01:21.291 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.292074 | orchestrator | 00:01:21.292 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.292100 | orchestrator | 00:01:21.292 STDOUT terraform:  } 2025-04-14 00:01:21.292114 | orchestrator | 00:01:21.292 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.292127 | orchestrator | 00:01:21.292 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.292140 | orchestrator | 00:01:21.292 STDOUT terraform:  } 2025-04-14 00:01:21.292153 | orchestrator | 00:01:21.292 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.292167 | orchestrator | 00:01:21.292 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.292196 | orchestrator | 00:01:21.292 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-04-14 00:01:21.292226 | orchestrator | 00:01:21.292 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.292238 | orchestrator | 00:01:21.292 STDOUT terraform:  } 2025-04-14 00:01:21.292251 | orchestrator | 00:01:21.292 STDOUT terraform:  } 2025-04-14 00:01:21.292347 | orchestrator | 00:01:21.292 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-04-14 00:01:21.292380 | orchestrator | 00:01:21.292 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.292416 | orchestrator | 00:01:21.292 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.292451 | orchestrator | 00:01:21.292 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.292487 | orchestrator | 00:01:21.292 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.292523 | orchestrator | 00:01:21.292 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.292559 | orchestrator | 00:01:21.292 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.292596 | orchestrator | 00:01:21.292 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.292632 | orchestrator | 00:01:21.292 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.292668 | orchestrator | 00:01:21.292 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.292705 | orchestrator | 00:01:21.292 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.292741 | orchestrator | 00:01:21.292 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.292777 | 
orchestrator | 00:01:21.292 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.292812 | orchestrator | 00:01:21.292 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.292849 | orchestrator | 00:01:21.292 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.292885 | orchestrator | 00:01:21.292 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.292922 | orchestrator | 00:01:21.292 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.292959 | orchestrator | 00:01:21.292 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.292973 | orchestrator | 00:01:21.292 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.293037 | orchestrator | 00:01:21.292 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.293069 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293081 | orchestrator | 00:01:21.293 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.293094 | orchestrator | 00:01:21.293 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.293122 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293134 | orchestrator | 00:01:21.293 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.293147 | orchestrator | 00:01:21.293 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.293175 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293187 | orchestrator | 00:01:21.293 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.293200 | orchestrator | 00:01:21.293 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.293211 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293227 | orchestrator | 00:01:21.293 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.293241 | orchestrator | 00:01:21.293 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.293268 | orchestrator | 00:01:21.293 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-04-14 00:01:21.293281 | orchestrator | 00:01:21.293 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.293331 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293340 | orchestrator | 00:01:21.293 STDOUT terraform:  } 2025-04-14 00:01:21.293352 | orchestrator | 00:01:21.293 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-04-14 00:01:21.293378 | orchestrator | 00:01:21.293 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.293415 | orchestrator | 00:01:21.293 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.293450 | orchestrator | 00:01:21.293 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.293486 | orchestrator | 00:01:21.293 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.293528 | orchestrator | 00:01:21.293 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.293559 | orchestrator | 00:01:21.293 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.293594 | orchestrator | 00:01:21.293 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.293629 | orchestrator | 00:01:21.293 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.293666 | orchestrator | 00:01:21.293 STDOUT terraform:  + 
dns_name = (known after apply) 2025-04-14 00:01:21.293703 | orchestrator | 00:01:21.293 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.293739 | orchestrator | 00:01:21.293 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.293775 | orchestrator | 00:01:21.293 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.293810 | orchestrator | 00:01:21.293 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.293847 | orchestrator | 00:01:21.293 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.293882 | orchestrator | 00:01:21.293 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.293918 | orchestrator | 00:01:21.293 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.293954 | orchestrator | 00:01:21.293 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.293980 | orchestrator | 00:01:21.293 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.294044 | orchestrator | 00:01:21.293 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.294057 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294068 | orchestrator | 00:01:21.294 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.294080 | orchestrator | 00:01:21.294 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.294091 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294116 | orchestrator | 00:01:21.294 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.294143 | orchestrator | 00:01:21.294 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.294155 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294166 | orchestrator | 00:01:21.294 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.294197 | orchestrator | 00:01:21.294 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.294209 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294234 | orchestrator | 00:01:21.294 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.294265 | orchestrator | 00:01:21.294 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.294277 | orchestrator | 00:01:21.294 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-04-14 00:01:21.294288 | orchestrator | 00:01:21.294 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.294299 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294311 | orchestrator | 00:01:21.294 STDOUT terraform:  } 2025-04-14 00:01:21.294358 | orchestrator | 00:01:21.294 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-04-14 00:01:21.294402 | orchestrator | 00:01:21.294 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.294440 | orchestrator | 00:01:21.294 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.294477 | orchestrator | 00:01:21.294 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.294512 | orchestrator | 00:01:21.294 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.294548 | orchestrator | 00:01:21.294 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.294585 | orchestrator | 00:01:21.294 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.294627 | 
orchestrator | 00:01:21.294 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.294664 | orchestrator | 00:01:21.294 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.294699 | orchestrator | 00:01:21.294 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.294739 | orchestrator | 00:01:21.294 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.294774 | orchestrator | 00:01:21.294 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.294809 | orchestrator | 00:01:21.294 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.294845 | orchestrator | 00:01:21.294 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.294881 | orchestrator | 00:01:21.294 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.294922 | orchestrator | 00:01:21.294 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.294953 | orchestrator | 00:01:21.294 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.294989 | orchestrator | 00:01:21.294 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.295030 | orchestrator | 00:01:21.294 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.295060 | orchestrator | 00:01:21.295 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.295072 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295083 | orchestrator | 00:01:21.295 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.295116 | orchestrator | 00:01:21.295 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.295128 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295139 | orchestrator | 00:01:21.295 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.295169 | orchestrator | 00:01:21.295 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.295181 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295192 | orchestrator | 00:01:21.295 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.295224 | orchestrator | 00:01:21.295 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.295236 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295249 | orchestrator | 00:01:21.295 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.295260 | orchestrator | 00:01:21.295 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.295293 | orchestrator | 00:01:21.295 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-04-14 00:01:21.295311 | orchestrator | 00:01:21.295 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.295322 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295333 | orchestrator | 00:01:21.295 STDOUT terraform:  } 2025-04-14 00:01:21.295383 | orchestrator | 00:01:21.295 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-04-14 00:01:21.295427 | orchestrator | 00:01:21.295 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.295463 | orchestrator | 00:01:21.295 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.295500 | orchestrator | 00:01:21.295 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.295536 | orchestrator | 00:01:21.295 STDOUT terraform:  + all_security_group_ids = 
(known after apply) 2025-04-14 00:01:21.295572 | orchestrator | 00:01:21.295 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.295608 | orchestrator | 00:01:21.295 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.295644 | orchestrator | 00:01:21.295 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.295681 | orchestrator | 00:01:21.295 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.295716 | orchestrator | 00:01:21.295 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.295753 | orchestrator | 00:01:21.295 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.295790 | orchestrator | 00:01:21.295 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.295828 | orchestrator | 00:01:21.295 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.295863 | orchestrator | 00:01:21.295 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.295898 | orchestrator | 00:01:21.295 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.295935 | orchestrator | 00:01:21.295 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.295971 | orchestrator | 00:01:21.295 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.296170 | orchestrator | 00:01:21.295 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.296258 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.296278 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.296294 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296309 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.296324 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.296338 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296369 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.296389 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.296405 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296438 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.296453 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.296467 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296482 | orchestrator | 00:01:21.296 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.296496 | orchestrator | 00:01:21.296 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.296511 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-04-14 00:01:21.296525 | orchestrator | 00:01:21.296 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.296540 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296554 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.296568 | orchestrator | 00:01:21.296 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-04-14 00:01:21.296588 | orchestrator | 00:01:21.296 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.296625 | orchestrator | 00:01:21.296 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.296652 | orchestrator | 00:01:21.296 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.296667 | orchestrator | 00:01:21.296 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.296682 | orchestrator | 00:01:21.296 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.296696 | orchestrator | 00:01:21.296 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.296715 | orchestrator | 00:01:21.296 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.296730 | orchestrator | 00:01:21.296 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.296750 | orchestrator | 00:01:21.296 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.296765 | orchestrator | 00:01:21.296 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.296783 | orchestrator | 00:01:21.296 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.296822 | orchestrator | 00:01:21.296 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.296837 | orchestrator | 00:01:21.296 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.296856 | orchestrator | 00:01:21.296 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.296895 | orchestrator | 00:01:21.296 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.296914 | orchestrator | 00:01:21.296 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.296952 | orchestrator | 00:01:21.296 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.296971 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.296987 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.297039 | orchestrator | 00:01:21.296 STDOUT terraform:  } 2025-04-14 00:01:21.297075 | orchestrator | 00:01:21.296 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.297091 | orchestrator | 00:01:21.296 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.297106 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297120 | orchestrator | 00:01:21.297 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.297140 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.297169 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297185 | orchestrator | 00:01:21.297 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.297199 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.297214 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297233 | orchestrator | 00:01:21.297 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.297248 | orchestrator | 00:01:21.297 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.297262 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-04-14 00:01:21.297277 | orchestrator | 00:01:21.297 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.297291 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297309 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297325 | orchestrator | 00:01:21.297 STDOUT terraform:  # 
openstack_networking_port_v2.node_port_management[5] will be created 2025-04-14 00:01:21.297343 | orchestrator | 00:01:21.297 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-04-14 00:01:21.297361 | orchestrator | 00:01:21.297 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.297408 | orchestrator | 00:01:21.297 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-04-14 00:01:21.297427 | orchestrator | 00:01:21.297 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-04-14 00:01:21.297471 | orchestrator | 00:01:21.297 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.297490 | orchestrator | 00:01:21.297 STDOUT terraform:  + device_id = (known after apply) 2025-04-14 00:01:21.297539 | orchestrator | 00:01:21.297 STDOUT terraform:  + device_owner = (known after apply) 2025-04-14 00:01:21.297558 | orchestrator | 00:01:21.297 STDOUT terraform:  + dns_assignment = (known after apply) 2025-04-14 00:01:21.297606 | orchestrator | 00:01:21.297 STDOUT terraform:  + dns_name = (known after apply) 2025-04-14 00:01:21.297634 | orchestrator | 00:01:21.297 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.297679 | orchestrator | 00:01:21.297 STDOUT terraform:  + mac_address = (known after apply) 2025-04-14 00:01:21.297720 | orchestrator | 00:01:21.297 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.297739 | orchestrator | 00:01:21.297 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-04-14 00:01:21.297783 | orchestrator | 00:01:21.297 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-04-14 00:01:21.297809 | orchestrator | 00:01:21.297 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.297848 | orchestrator | 00:01:21.297 STDOUT terraform:  + security_group_ids = (known after apply) 2025-04-14 00:01:21.297889 | orchestrator | 00:01:21.297 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.297908 | orchestrator | 00:01:21.297 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.297932 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-04-14 00:01:21.297948 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.297966 | orchestrator | 00:01:21.297 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.297981 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-04-14 00:01:21.298078 | orchestrator | 00:01:21.297 STDOUT terraform:  } 2025-04-14 00:01:21.298100 | orchestrator | 00:01:21.297 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.298128 | orchestrator | 00:01:21.297 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-04-14 00:01:21.298143 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.298162 | orchestrator | 00:01:21.298 STDOUT terraform:  + allowed_address_pairs { 2025-04-14 00:01:21.298176 | orchestrator | 00:01:21.298 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-04-14 00:01:21.298191 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.298206 | orchestrator | 00:01:21.298 STDOUT terraform:  + binding (known after apply) 2025-04-14 00:01:21.298220 | orchestrator | 00:01:21.298 STDOUT terraform:  + fixed_ip { 2025-04-14 00:01:21.298238 | orchestrator | 00:01:21.298 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-04-14 00:01:21.298258 | orchestrator | 00:01:21.298 STDOUT terraform:  
+ subnet_id = (known after apply) 2025-04-14 00:01:21.298273 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.298288 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.298306 | orchestrator | 00:01:21.298 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-04-14 00:01:21.298326 | orchestrator | 00:01:21.298 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-04-14 00:01:21.298344 | orchestrator | 00:01:21.298 STDOUT terraform:  + force_destroy = false 2025-04-14 00:01:21.298385 | orchestrator | 00:01:21.298 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.298405 | orchestrator | 00:01:21.298 STDOUT terraform:  + port_id = (known after apply) 2025-04-14 00:01:21.298420 | orchestrator | 00:01:21.298 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.298438 | orchestrator | 00:01:21.298 STDOUT terraform:  + router_id = (known after apply) 2025-04-14 00:01:21.298456 | orchestrator | 00:01:21.298 STDOUT terraform:  + subnet_id = (known after apply) 2025-04-14 00:01:21.298501 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.298520 | orchestrator | 00:01:21.298 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-04-14 00:01:21.298538 | orchestrator | 00:01:21.298 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-04-14 00:01:21.298572 | orchestrator | 00:01:21.298 STDOUT terraform:  + admin_state_up = (known after apply) 2025-04-14 00:01:21.298590 | orchestrator | 00:01:21.298 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.298608 | orchestrator | 00:01:21.298 STDOUT terraform:  + availability_zone_hints = [ 2025-04-14 00:01:21.298626 | orchestrator | 00:01:21.298 STDOUT terraform:  + "nova", 2025-04-14 00:01:21.298673 | orchestrator | 00:01:21.298 STDOUT terraform:  ] 2025-04-14 00:01:21.298693 | orchestrator | 00:01:21.298 STDOUT terraform:  + distributed = (known after apply) 2025-04-14 00:01:21.298711 | orchestrator | 00:01:21.298 STDOUT terraform:  + enable_snat = (known after apply) 2025-04-14 00:01:21.298758 | orchestrator | 00:01:21.298 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-04-14 00:01:21.298801 | orchestrator | 00:01:21.298 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.298820 | orchestrator | 00:01:21.298 STDOUT terraform:  + name = "testbed" 2025-04-14 00:01:21.298859 | orchestrator | 00:01:21.298 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.298901 | orchestrator | 00:01:21.298 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.298920 | orchestrator | 00:01:21.298 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-04-14 00:01:21.298981 | orchestrator | 00:01:21.298 STDOUT terraform:  } 2025-04-14 00:01:21.299061 | orchestrator | 00:01:21.298 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-04-14 00:01:21.299088 | orchestrator | 00:01:21.298 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-04-14 00:01:21.299103 | orchestrator | 00:01:21.299 STDOUT terraform:  + description = "ssh" 2025-04-14 00:01:21.299122 | orchestrator | 00:01:21.299 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.299137 | orchestrator | 00:01:21.299 STDOUT terraform:  + ethertype = 
"IPv4" 2025-04-14 00:01:21.299152 | orchestrator | 00:01:21.299 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.299170 | orchestrator | 00:01:21.299 STDOUT terraform:  + port_range_max = 22 2025-04-14 00:01:21.299185 | orchestrator | 00:01:21.299 STDOUT terraform:  + port_range_min = 22 2025-04-14 00:01:21.299200 | orchestrator | 00:01:21.299 STDOUT terraform:  + protocol = "tcp" 2025-04-14 00:01:21.299218 | orchestrator | 00:01:21.299 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.299233 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.299250 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.299268 | orchestrator | 00:01:21.299 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.299285 | orchestrator | 00:01:21.299 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.299302 | orchestrator | 00:01:21.299 STDOUT terraform:  } 2025-04-14 00:01:21.299440 | orchestrator | 00:01:21.299 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-04-14 00:01:21.299478 | orchestrator | 00:01:21.299 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-04-14 00:01:21.299491 | orchestrator | 00:01:21.299 STDOUT terraform:  + description = "wireguard" 2025-04-14 00:01:21.299505 | orchestrator | 00:01:21.299 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.299515 | orchestrator | 00:01:21.299 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.299524 | orchestrator | 00:01:21.299 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.299532 | orchestrator | 00:01:21.299 STDOUT terraform:  + port_range_max = 51820 2025-04-14 00:01:21.299543 | orchestrator | 00:01:21.299 STDOUT terraform:  + port_range_min = 51820 2025-04-14 00:01:21.299570 | orchestrator | 00:01:21.299 STDOUT terraform:  + protocol = "udp" 2025-04-14 00:01:21.299581 | orchestrator | 00:01:21.299 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.299592 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.299620 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.299650 | orchestrator | 00:01:21.299 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.299671 | orchestrator | 00:01:21.299 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.299682 | orchestrator | 00:01:21.299 STDOUT terraform:  } 2025-04-14 00:01:21.299741 | orchestrator | 00:01:21.299 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-04-14 00:01:21.299793 | orchestrator | 00:01:21.299 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-04-14 00:01:21.299817 | orchestrator | 00:01:21.299 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.299829 | orchestrator | 00:01:21.299 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.299868 | orchestrator | 00:01:21.299 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.299879 | orchestrator | 00:01:21.299 STDOUT terraform:  + protocol = "tcp" 2025-04-14 00:01:21.299915 | orchestrator | 00:01:21.299 STDOUT terraform:  + region = (known after apply) 2025-04-14 
00:01:21.299945 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.299975 | orchestrator | 00:01:21.299 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-14 00:01:21.300022 | orchestrator | 00:01:21.299 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.300054 | orchestrator | 00:01:21.300 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.300065 | orchestrator | 00:01:21.300 STDOUT terraform:  } 2025-04-14 00:01:21.300118 | orchestrator | 00:01:21.300 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-04-14 00:01:21.300170 | orchestrator | 00:01:21.300 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-04-14 00:01:21.300187 | orchestrator | 00:01:21.300 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.300211 | orchestrator | 00:01:21.300 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.300243 | orchestrator | 00:01:21.300 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.300254 | orchestrator | 00:01:21.300 STDOUT terraform:  + protocol = "udp" 2025-04-14 00:01:21.300291 | orchestrator | 00:01:21.300 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.300320 | orchestrator | 00:01:21.300 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.300353 | orchestrator | 00:01:21.300 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-04-14 00:01:21.300381 | orchestrator | 00:01:21.300 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.300412 | orchestrator | 00:01:21.300 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.300424 | orchestrator | 00:01:21.300 STDOUT terraform:  } 2025-04-14 00:01:21.300476 | orchestrator | 00:01:21.300 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-04-14 00:01:21.300529 | orchestrator | 00:01:21.300 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-04-14 00:01:21.300545 | orchestrator | 00:01:21.300 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.300570 | orchestrator | 00:01:21.300 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.300602 | orchestrator | 00:01:21.300 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.300614 | orchestrator | 00:01:21.300 STDOUT terraform:  + protocol = "icmp" 2025-04-14 00:01:21.300649 | orchestrator | 00:01:21.300 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.300680 | orchestrator | 00:01:21.300 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.300706 | orchestrator | 00:01:21.300 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.300736 | orchestrator | 00:01:21.300 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.300771 | orchestrator | 00:01:21.300 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.300827 | orchestrator | 00:01:21.300 STDOUT terraform:  } 2025-04-14 00:01:21.300838 | orchestrator | 00:01:21.300 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-04-14 00:01:21.300881 | orchestrator | 00:01:21.300 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule1" { 2025-04-14 00:01:21.300907 | orchestrator | 00:01:21.300 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.300918 | orchestrator | 00:01:21.300 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.300953 | orchestrator | 00:01:21.300 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.300964 | orchestrator | 00:01:21.300 STDOUT terraform:  + protocol = "tcp" 2025-04-14 00:01:21.301012 | orchestrator | 00:01:21.300 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.301042 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.301058 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.301093 | orchestrator | 00:01:21.301 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.301125 | orchestrator | 00:01:21.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.301181 | orchestrator | 00:01:21.301 STDOUT terraform:  } 2025-04-14 00:01:21.301192 | orchestrator | 00:01:21.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-04-14 00:01:21.301236 | orchestrator | 00:01:21.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-04-14 00:01:21.301261 | orchestrator | 00:01:21.301 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.301272 | orchestrator | 00:01:21.301 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.301308 | orchestrator | 00:01:21.301 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.301319 | orchestrator | 00:01:21.301 STDOUT terraform:  + protocol = "udp" 2025-04-14 00:01:21.301355 | orchestrator | 00:01:21.301 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.301385 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.301410 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.301440 | orchestrator | 00:01:21.301 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.301471 | orchestrator | 00:01:21.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.301482 | orchestrator | 00:01:21.301 STDOUT terraform:  } 2025-04-14 00:01:21.301532 | orchestrator | 00:01:21.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-04-14 00:01:21.301583 | orchestrator | 00:01:21.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-04-14 00:01:21.301608 | orchestrator | 00:01:21.301 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.301620 | orchestrator | 00:01:21.301 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.301655 | orchestrator | 00:01:21.301 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.301667 | orchestrator | 00:01:21.301 STDOUT terraform:  + protocol = "icmp" 2025-04-14 00:01:21.301703 | orchestrator | 00:01:21.301 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.301734 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.301758 | orchestrator | 00:01:21.301 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.301789 | orchestrator | 00:01:21.301 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-04-14 00:01:21.301820 | orchestrator | 00:01:21.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.301832 | orchestrator | 00:01:21.301 STDOUT terraform:  } 2025-04-14 00:01:21.301879 | orchestrator | 00:01:21.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-04-14 00:01:21.301930 | orchestrator | 00:01:21.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-04-14 00:01:21.301942 | orchestrator | 00:01:21.301 STDOUT terraform:  + description = "vrrp" 2025-04-14 00:01:21.301971 | orchestrator | 00:01:21.301 STDOUT terraform:  + direction = "ingress" 2025-04-14 00:01:21.301982 | orchestrator | 00:01:21.301 STDOUT terraform:  + ethertype = "IPv4" 2025-04-14 00:01:21.302046 | orchestrator | 00:01:21.301 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.302060 | orchestrator | 00:01:21.302 STDOUT terraform:  + protocol = "112" 2025-04-14 00:01:21.302102 | orchestrator | 00:01:21.302 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.302118 | orchestrator | 00:01:21.302 STDOUT terraform:  + remote_group_id = (known after apply) 2025-04-14 00:01:21.302149 | orchestrator | 00:01:21.302 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-04-14 00:01:21.302187 | orchestrator | 00:01:21.302 STDOUT terraform:  + security_group_id = (known after apply) 2025-04-14 00:01:21.302198 | orchestrator | 00:01:21.302 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.302208 | orchestrator | 00:01:21.302 STDOUT terraform:  } 2025-04-14 00:01:21.302266 | orchestrator | 00:01:21.302 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-04-14 00:01:21.302314 | orchestrator | 00:01:21.302 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-04-14 00:01:21.302326 | orchestrator | 00:01:21.302 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.302373 | orchestrator | 00:01:21.302 STDOUT terraform:  + description = "management security group" 2025-04-14 00:01:21.302385 | orchestrator | 00:01:21.302 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.302428 | orchestrator | 00:01:21.302 STDOUT terraform:  + name = "testbed-management" 2025-04-14 00:01:21.302439 | orchestrator | 00:01:21.302 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.302479 | orchestrator | 00:01:21.302 STDOUT terraform:  + stateful = (known after apply) 2025-04-14 00:01:21.302510 | orchestrator | 00:01:21.302 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.302565 | orchestrator | 00:01:21.302 STDOUT terraform:  } 2025-04-14 00:01:21.302580 | orchestrator | 00:01:21.302 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-04-14 00:01:21.302637 | orchestrator | 00:01:21.302 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-04-14 00:01:21.302668 | orchestrator | 00:01:21.302 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.302679 | orchestrator | 00:01:21.302 STDOUT terraform:  + description = "node security group" 2025-04-14 00:01:21.302720 | orchestrator | 00:01:21.302 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.302732 | orchestrator | 00:01:21.302 STDOUT terraform:  + name = "testbed-node" 2025-04-14 00:01:21.302769 | 
orchestrator | 00:01:21.302 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.302780 | orchestrator | 00:01:21.302 STDOUT terraform:  + stateful = (known after apply) 2025-04-14 00:01:21.302819 | orchestrator | 00:01:21.302 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.302871 | orchestrator | 00:01:21.302 STDOUT terraform:  } 2025-04-14 00:01:21.302883 | orchestrator | 00:01:21.302 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-04-14 00:01:21.302913 | orchestrator | 00:01:21.302 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-04-14 00:01:21.302951 | orchestrator | 00:01:21.302 STDOUT terraform:  + all_tags = (known after apply) 2025-04-14 00:01:21.302962 | orchestrator | 00:01:21.302 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-04-14 00:01:21.303019 | orchestrator | 00:01:21.302 STDOUT terraform:  + dns_nameservers = [ 2025-04-14 00:01:21.303031 | orchestrator | 00:01:21.302 STDOUT terraform:  + "8.8.8.8", 2025-04-14 00:01:21.303041 | orchestrator | 00:01:21.302 STDOUT terraform:  + "9.9.9.9", 2025-04-14 00:01:21.303050 | orchestrator | 00:01:21.303 STDOUT terraform:  ] 2025-04-14 00:01:21.303060 | orchestrator | 00:01:21.303 STDOUT terraform:  + enable_dhcp = true 2025-04-14 00:01:21.303099 | orchestrator | 00:01:21.303 STDOUT terraform:  + gateway_ip = (known after apply) 2025-04-14 00:01:21.303111 | orchestrator | 00:01:21.303 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.303121 | orchestrator | 00:01:21.303 STDOUT terraform:  + ip_version = 4 2025-04-14 00:01:21.303264 | orchestrator | 00:01:21.303 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-04-14 00:01:21.303316 | orchestrator | 00:01:21.303 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-04-14 00:01:21.303337 | orchestrator | 00:01:21.303 STDOUT terraform:  + name = "subnet-testbed-management" 2025-04-14 00:01:21.303356 | orchestrator | 00:01:21.303 STDOUT terraform:  + network_id = (known after apply) 2025-04-14 00:01:21.303378 | orchestrator | 00:01:21.303 STDOUT terraform:  + no_gateway = false 2025-04-14 00:01:21.303396 | orchestrator | 00:01:21.303 STDOUT terraform:  + region = (known after apply) 2025-04-14 00:01:21.303410 | orchestrator | 00:01:21.303 STDOUT terraform:  + service_types = (known after apply) 2025-04-14 00:01:21.303424 | orchestrator | 00:01:21.303 STDOUT terraform:  + tenant_id = (known after apply) 2025-04-14 00:01:21.303439 | orchestrator | 00:01:21.303 STDOUT terraform:  + allocation_pool { 2025-04-14 00:01:21.303454 | orchestrator | 00:01:21.303 STDOUT terraform:  + end = "192.168.31.250" 2025-04-14 00:01:21.303472 | orchestrator | 00:01:21.303 STDOUT terraform:  + start = "192.168.31.200" 2025-04-14 00:01:21.303488 | orchestrator | 00:01:21.303 STDOUT terraform:  } 2025-04-14 00:01:21.303503 | orchestrator | 00:01:21.303 STDOUT terraform:  } 2025-04-14 00:01:21.303518 | orchestrator | 00:01:21.303 STDOUT terraform:  # terraform_data.image will be created 2025-04-14 00:01:21.303532 | orchestrator | 00:01:21.303 STDOUT terraform:  + resource "terraform_data" "image" { 2025-04-14 00:01:21.303547 | orchestrator | 00:01:21.303 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.303565 | orchestrator | 00:01:21.303 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-14 00:01:21.303594 | orchestrator | 00:01:21.303 STDOUT terraform:  + output = (known after apply) 2025-04-14 00:01:21.303609 | orchestrator | 
00:01:21.303 STDOUT terraform:  } 2025-04-14 00:01:21.303624 | orchestrator | 00:01:21.303 STDOUT terraform:  # terraform_data.image_node will be created 2025-04-14 00:01:21.303638 | orchestrator | 00:01:21.303 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-04-14 00:01:21.303656 | orchestrator | 00:01:21.303 STDOUT terraform:  + id = (known after apply) 2025-04-14 00:01:21.303671 | orchestrator | 00:01:21.303 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-04-14 00:01:21.303685 | orchestrator | 00:01:21.303 STDOUT terraform:  + output = (known after apply) 2025-04-14 00:01:21.303699 | orchestrator | 00:01:21.303 STDOUT terraform:  } 2025-04-14 00:01:21.303714 | orchestrator | 00:01:21.303 STDOUT terraform: Plan: 82 to add, 0 to change, 0 to destroy. 2025-04-14 00:01:21.303731 | orchestrator | 00:01:21.303 STDOUT terraform: Changes to Outputs: 2025-04-14 00:01:21.534419 | orchestrator | 00:01:21.303 STDOUT terraform:  + manager_address = (sensitive value) 2025-04-14 00:01:21.534491 | orchestrator | 00:01:21.303 STDOUT terraform:  + private_key = (sensitive value) 2025-04-14 00:01:21.534511 | orchestrator | 00:01:21.534 STDOUT terraform: terraform_data.image: Creating... 2025-04-14 00:01:21.535123 | orchestrator | 00:01:21.534 STDOUT terraform: terraform_data.image_node: Creating... 2025-04-14 00:01:21.535159 | orchestrator | 00:01:21.534 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0b6f3e4d-edf7-8d42-f474-8d77e8f43cff] 2025-04-14 00:01:21.535174 | orchestrator | 00:01:21.535 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=05d1fcc1-3862-733b-5524-29933b224b4b] 2025-04-14 00:01:21.555755 | orchestrator | 00:01:21.555 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-04-14 00:01:21.557085 | orchestrator | 00:01:21.555 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-04-14 00:01:21.557147 | orchestrator | 00:01:21.556 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-04-14 00:01:21.557574 | orchestrator | 00:01:21.557 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-04-14 00:01:21.558764 | orchestrator | 00:01:21.558 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-04-14 00:01:21.563530 | orchestrator | 00:01:21.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-04-14 00:01:21.563963 | orchestrator | 00:01:21.563 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-04-14 00:01:21.564163 | orchestrator | 00:01:21.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-04-14 00:01:21.564466 | orchestrator | 00:01:21.564 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creating... 2025-04-14 00:01:21.569743 | orchestrator | 00:01:21.569 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-04-14 00:01:22.026545 | orchestrator | 00:01:22.026 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-14 00:01:22.033152 | orchestrator | 00:01:22.032 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
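
For reference, the six openstack_networking_port_v2.node_port_management entries planned above collapse into one counted resource. Below is a minimal HCL sketch reconstructed from the plan output; the count expression, the cidrhost() helper, and the references to net_management/subnet_management are assumptions, not copied from the testbed sources.

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                                      # plan shows indexes [0]..[5]
  network_id = openstack_networking_network_v2.net_management.id      # assumption: attached to the management network

  fixed_ip {
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)        # 192.168.16.10 .. 192.168.16.15 as in the plan
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id  # assumption
  }

  # The same four allowed_address_pairs appear on every port in the plan.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}
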
2025-04-14 00:01:22.251133 | orchestrator | 00:01:22.250 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-04-14 00:01:22.259418 | orchestrator | 00:01:22.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creating... 2025-04-14 00:01:22.301802 | orchestrator | 00:01:22.301 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-04-14 00:01:22.309839 | orchestrator | 00:01:22.309 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-04-14 00:01:27.361199 | orchestrator | 00:01:27.360 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 5s [id=bfc9382b-63b6-45a8-8c11-1677269fc2db] 2025-04-14 00:01:27.367322 | orchestrator | 00:01:27.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creating... 2025-04-14 00:01:31.558291 | orchestrator | 00:01:31.557 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-04-14 00:01:31.565250 | orchestrator | 00:01:31.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-04-14 00:01:31.565417 | orchestrator | 00:01:31.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-04-14 00:01:31.565581 | orchestrator | 00:01:31.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Still creating... [10s elapsed] 2025-04-14 00:01:31.565735 | orchestrator | 00:01:31.565 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-04-14 00:01:31.570768 | orchestrator | 00:01:31.570 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-04-14 00:01:32.033748 | orchestrator | 00:01:32.033 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-04-14 00:01:32.136803 | orchestrator | 00:01:32.136 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=64225693-fc38-404b-a874-78411dc3466d] 2025-04-14 00:01:32.142926 | orchestrator | 00:01:32.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=03a3c0ae-ae5b-4103-947a-830f0553055f] 2025-04-14 00:01:32.145476 | orchestrator | 00:01:32.145 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creating... 2025-04-14 00:01:32.153468 | orchestrator | 00:01:32.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creating... 2025-04-14 00:01:32.189602 | orchestrator | 00:01:32.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=0623da07-2b86-4b0f-8ae6-479bebb1d3d2] 2025-04-14 00:01:32.194880 | orchestrator | 00:01:32.194 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[12]: Creation complete after 10s [id=61d8c1b1-8af8-4257-810b-e0715f81f0ca] 2025-04-14 00:01:32.196084 | orchestrator | 00:01:32.195 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=f216f5ad-8b9f-40bf-b892-25305f930110] 2025-04-14 00:01:32.199452 | orchestrator | 00:01:32.199 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creating... 2025-04-14 00:01:32.204846 | orchestrator | 00:01:32.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 
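
The data.openstack_images_image_v2.image and image_node reads above resolve "Ubuntu 24.04" (the value carried by terraform_data.image in the plan) to the same image id. A sketch of what such a lookup typically looks like; the filter arguments are assumptions, since they are not visible in the log.

resource "terraform_data" "image" {
  input = "Ubuntu 24.04"            # value shown in the plan output
}

data "openstack_images_image_v2" "image" {
  name        = "Ubuntu 24.04"      # same value; the real config may reference terraform_data.image instead
  most_recent = true                # assumption: pick the newest image with that name
}
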
2025-04-14 00:01:32.206886 | orchestrator | 00:01:32.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creating... 2025-04-14 00:01:32.219561 | orchestrator | 00:01:32.219 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=1d452a86-d7ed-4b7e-a6e2-8adfa0173156] 2025-04-14 00:01:32.228264 | orchestrator | 00:01:32.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creating... 2025-04-14 00:01:32.257122 | orchestrator | 00:01:32.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=ec5b891c-a93a-4443-952c-376a64ed5153] 2025-04-14 00:01:32.260107 | orchestrator | 00:01:32.259 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Still creating... [10s elapsed] 2025-04-14 00:01:32.264558 | orchestrator | 00:01:32.264 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-04-14 00:01:32.310434 | orchestrator | 00:01:32.310 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-04-14 00:01:32.436713 | orchestrator | 00:01:32.436 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[11]: Creation complete after 10s [id=d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3] 2025-04-14 00:01:32.444637 | orchestrator | 00:01:32.444 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creating... 2025-04-14 00:01:32.502215 | orchestrator | 00:01:32.501 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=ff496d9e-d724-4be6-b701-ae323f1b3d4d] 2025-04-14 00:01:32.512627 | orchestrator | 00:01:32.512 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-04-14 00:01:37.370453 | orchestrator | 00:01:37.369 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Still creating... [10s elapsed] 2025-04-14 00:01:37.534135 | orchestrator | 00:01:37.533 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[13]: Creation complete after 11s [id=f4895685-066a-4248-b20c-4cd40b9ff210] 2025-04-14 00:01:37.541209 | orchestrator | 00:01:37.540 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-04-14 00:01:42.146725 | orchestrator | 00:01:42.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Still creating... [10s elapsed] 2025-04-14 00:01:42.154076 | orchestrator | 00:01:42.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Still creating... [10s elapsed] 2025-04-14 00:01:42.200523 | orchestrator | 00:01:42.200 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Still creating... [10s elapsed] 2025-04-14 00:01:42.206805 | orchestrator | 00:01:42.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-04-14 00:01:42.207886 | orchestrator | 00:01:42.207 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Still creating... [10s elapsed] 2025-04-14 00:01:42.229279 | orchestrator | 00:01:42.228 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Still creating... [10s elapsed] 2025-04-14 00:01:42.265722 | orchestrator | 00:01:42.265 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... 
[10s elapsed] 2025-04-14 00:01:42.317432 | orchestrator | 00:01:42.317 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[16]: Creation complete after 10s [id=bda45bef-0c7e-4642-a586-327a75973f57] 2025-04-14 00:01:42.338611 | orchestrator | 00:01:42.338 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[15]: Creation complete after 10s [id=938a8574-ab31-4693-953b-ad06db98cc0e] 2025-04-14 00:01:42.342238 | orchestrator | 00:01:42.341 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-04-14 00:01:42.351703 | orchestrator | 00:01:42.351 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-04-14 00:01:42.357480 | orchestrator | 00:01:42.357 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=192f4aa30c05bc40993a89f0acfbd6814f9bb70f] 2025-04-14 00:01:42.367070 | orchestrator | 00:01:42.366 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-04-14 00:01:42.377040 | orchestrator | 00:01:42.376 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[9]: Creation complete after 10s [id=c26cfb84-2784-4068-ac39-279abdffc82e] 2025-04-14 00:01:42.381379 | orchestrator | 00:01:42.381 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-04-14 00:01:42.425051 | orchestrator | 00:01:42.424 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[10]: Creation complete after 10s [id=676c1686-7068-4aa0-a437-1ca2ad657cc9] 2025-04-14 00:01:42.434554 | orchestrator | 00:01:42.434 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=4c093f95-6486-49b6-be92-05fa28509200] 2025-04-14 00:01:42.435145 | orchestrator | 00:01:42.434 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-04-14 00:01:42.446190 | orchestrator | 00:01:42.445 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Still creating... [10s elapsed] 2025-04-14 00:01:42.448145 | orchestrator | 00:01:42.448 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-04-14 00:01:42.454075 | orchestrator | 00:01:42.453 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=9d9d9fa7673d24e1fd0e16e830696dff03279d75] 2025-04-14 00:01:42.457290 | orchestrator | 00:01:42.457 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[17]: Creation complete after 10s [id=4f96d1f1-65aa-443a-b2b5-a30371495496] 2025-04-14 00:01:42.460353 | orchestrator | 00:01:42.460 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-04-14 00:01:42.462468 | orchestrator | 00:01:42.462 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-04-14 00:01:42.476542 | orchestrator | 00:01:42.476 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=318a826d-e453-41a1-9cbe-aee990c4d38b] 2025-04-14 00:01:42.513354 | orchestrator | 00:01:42.513 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-04-14 00:01:42.623177 | orchestrator | 00:01:42.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[14]: Creation complete after 11s [id=fc318d73-efa9-4c13-b4ab-953b52f9b4b0] 2025-04-14 00:01:42.847849 | orchestrator | 00:01:42.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=513c088e-3162-41df-b822-52bd96b6413e] 2025-04-14 00:01:47.542599 | orchestrator | 00:01:47.542 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-04-14 00:01:47.862579 | orchestrator | 00:01:47.862 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=96ad7b7c-0c39-408f-b5ea-89bdf3128e12] 2025-04-14 00:01:48.406063 | orchestrator | 00:01:48.405 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=a3ddc071-d061-4526-962b-4acefc4cb3f3] 2025-04-14 00:01:48.415710 | orchestrator | 00:01:48.415 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-04-14 00:01:52.344114 | orchestrator | 00:01:52.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-04-14 00:01:52.367644 | orchestrator | 00:01:52.367 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-04-14 00:01:52.382091 | orchestrator | 00:01:52.381 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-04-14 00:01:52.436648 | orchestrator | 00:01:52.436 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-04-14 00:01:52.461496 | orchestrator | 00:01:52.461 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-04-14 00:01:52.709421 | orchestrator | 00:01:52.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=cc2b2766-94e1-4878-a1a5-413ffcf6433c] 2025-04-14 00:01:52.710635 | orchestrator | 00:01:52.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3869222f-65df-4a19-aa83-a02710b9e82d] 2025-04-14 00:01:52.725424 | orchestrator | 00:01:52.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=ea6d87d8-8d23-4a2a-943a-5d6f418db5cf] 2025-04-14 00:01:52.768798 | orchestrator | 00:01:52.768 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=d052f429-2014-4477-b3ba-20099dd124f2] 2025-04-14 00:01:52.804720 | orchestrator | 00:01:52.804 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=00d3bb8a-d17e-4e3a-a7c0-1a5acdf4d331] 2025-04-14 00:01:55.035268 | orchestrator | 00:01:55.035 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=553004cf-b860-4ed7-bf31-4b0606c3787a] 2025-04-14 00:01:55.040301 | orchestrator | 00:01:55.040 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-04-14 00:01:55.041687 | orchestrator | 00:01:55.041 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-04-14 00:01:55.042055 | orchestrator | 00:01:55.041 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
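
The subnet and router that just finished creating match the plan entries further up (cidr 192.168.16.0/20, DNS 8.8.8.8/9.9.9.9, allocation pool 192.168.31.200-250, router "testbed" uplinked to external network e6be7364-bfd8-4de7-8120-8f41c69a139a). A compact HCL sketch of those resources, reconstructed from the plan; only the network_id wiring is an assumption.

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumption
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}
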
2025-04-14 00:01:55.162117 | orchestrator | 00:01:55.161 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1590cbbf-d38e-4f4f-aacc-d90391ddab43] 2025-04-14 00:01:55.174179 | orchestrator | 00:01:55.173 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-04-14 00:01:55.175251 | orchestrator | 00:01:55.175 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-04-14 00:01:55.175446 | orchestrator | 00:01:55.175 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-04-14 00:01:55.175778 | orchestrator | 00:01:55.175 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-04-14 00:01:55.180550 | orchestrator | 00:01:55.180 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-04-14 00:01:55.187518 | orchestrator | 00:01:55.187 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-04-14 00:01:55.189873 | orchestrator | 00:01:55.189 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-04-14 00:01:55.191105 | orchestrator | 00:01:55.190 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-04-14 00:01:55.251852 | orchestrator | 00:01:55.251 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=638efcf6-94a8-40e3-ab05-c627b34a09f9] 2025-04-14 00:01:55.269757 | orchestrator | 00:01:55.269 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-04-14 00:01:55.478406 | orchestrator | 00:01:55.477 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d4bfe6d0-8c51-493a-9e22-f5fa2298fd9f] 2025-04-14 00:01:55.494188 | orchestrator | 00:01:55.493 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-04-14 00:01:55.643057 | orchestrator | 00:01:55.642 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=aed6baee-254f-40d1-b6f5-dee1ecb3c6aa] 2025-04-14 00:01:55.650459 | orchestrator | 00:01:55.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-04-14 00:01:55.756465 | orchestrator | 00:01:55.755 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=f3786880-3567-4b3c-8927-03b247890f7b] 2025-04-14 00:01:55.763910 | orchestrator | 00:01:55.763 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-04-14 00:01:55.832331 | orchestrator | 00:01:55.831 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a1f30d7e-d37e-44f1-8539-4dd1039f2d71] 2025-04-14 00:01:55.842116 | orchestrator | 00:01:55.841 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-04-14 00:01:55.934442 | orchestrator | 00:01:55.934 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=3e3a1175-9d25-4cd9-a646-5fb3fa376f77] 2025-04-14 00:01:55.941718 | orchestrator | 00:01:55.941 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
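
The security-group rules being created here correspond to the rule blocks in the plan (ssh 22/tcp and wireguard 51820/udp from 0.0.0.0/0, tcp/udp from 192.168.16.0/20, icmp, and VRRP). Two representative rules as HCL, reconstructed from the plan output; which group the VRRP rule attaches to is an assumption, since the plan only shows "(known after apply)".

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"   # VRRP is IP protocol 112, as shown in the plan
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumption
}
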
2025-04-14 00:01:55.953035 | orchestrator | 00:01:55.952 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=78221eb5-aa17-40d8-8085-8672f49c42ad] 2025-04-14 00:01:55.960088 | orchestrator | 00:01:55.959 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-04-14 00:01:56.089730 | orchestrator | 00:01:56.089 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=e8bb7f5e-98ab-4b9a-9be6-4514c44ccd39] 2025-04-14 00:01:56.101352 | orchestrator | 00:01:56.101 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-04-14 00:01:56.248782 | orchestrator | 00:01:56.248 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=4a80de5a-1922-479f-8ce0-5022efa16ba4] 2025-04-14 00:01:56.356104 | orchestrator | 00:01:56.355 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=7a861a2d-673a-43ca-8272-2a499a54cba5] 2025-04-14 00:02:00.817164 | orchestrator | 00:02:00.816 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=36d36d40-20d7-47cc-b7c9-d464c834b798] 2025-04-14 00:02:00.845951 | orchestrator | 00:02:00.845 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=1887503b-afdb-4bbd-8f23-7d9515a3500b] 2025-04-14 00:02:00.882601 | orchestrator | 00:02:00.882 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=ccac8439-b890-400f-ac52-c95bcf2715c5] 2025-04-14 00:02:01.016016 | orchestrator | 00:02:01.015 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=7f4afe13-6a95-42c2-aca0-a9e15c6c7aab] 2025-04-14 00:02:01.231432 | orchestrator | 00:02:01.231 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=9f6f08de-95e9-466a-aa72-e8d571cb7a0f] 2025-04-14 00:02:01.416563 | orchestrator | 00:02:01.416 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=aba87697-cd50-4f43-a7f3-dddf36044400] 2025-04-14 00:02:02.732741 | orchestrator | 00:02:02.732 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 7s [id=05ae044c-d3d5-4faa-a6cc-30284abac626] 2025-04-14 00:02:02.986531 | orchestrator | 00:02:02.986 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=c376956c-3585-4760-8b71-84bd0647a98e] 2025-04-14 00:02:03.013618 | orchestrator | 00:02:03.013 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-04-14 00:02:03.016571 | orchestrator | 00:02:03.016 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-04-14 00:02:03.018523 | orchestrator | 00:02:03.018 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-04-14 00:02:03.031863 | orchestrator | 00:02:03.031 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-04-14 00:02:03.035194 | orchestrator | 00:02:03.035 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-04-14 00:02:03.037343 | orchestrator | 00:02:03.037 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 
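
The manager then gets a floating IP bound to its management port (manager_floating_ip and the association created below). A sketch of the usual pattern; the pool name is an assumption, as it is not visible in the log.

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external"   # assumption: name of the provider's external network
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}
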
2025-04-14 00:02:03.038288 | orchestrator | 00:02:03.038 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-04-14 00:02:09.333800 | orchestrator | 00:02:09.333 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=bbbed13c-b8c8-494a-8139-9257fbebd8cb] 2025-04-14 00:02:09.342454 | orchestrator | 00:02:09.342 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-04-14 00:02:09.349313 | orchestrator | 00:02:09.349 STDOUT terraform: local_file.inventory: Creating... 2025-04-14 00:02:09.351295 | orchestrator | 00:02:09.351 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-04-14 00:02:09.357387 | orchestrator | 00:02:09.357 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b56fb3ef7631263abb06f398db36c39bd55aa328] 2025-04-14 00:02:09.358179 | orchestrator | 00:02:09.357 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=74fb6e0b6bbcb65dc4faf7b2ef747435101c3327] 2025-04-14 00:02:09.835176 | orchestrator | 00:02:09.834 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=bbbed13c-b8c8-494a-8139-9257fbebd8cb] 2025-04-14 00:02:13.022936 | orchestrator | 00:02:13.022 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-04-14 00:02:13.023103 | orchestrator | 00:02:13.022 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-04-14 00:02:13.033401 | orchestrator | 00:02:13.033 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-04-14 00:02:13.038510 | orchestrator | 00:02:13.038 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-04-14 00:02:13.039657 | orchestrator | 00:02:13.039 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-04-14 00:02:13.039730 | orchestrator | 00:02:13.039 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-04-14 00:02:23.025095 | orchestrator | 00:02:23.024 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-04-14 00:02:23.034292 | orchestrator | 00:02:23.024 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-04-14 00:02:23.034431 | orchestrator | 00:02:23.033 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-04-14 00:02:23.039985 | orchestrator | 00:02:23.039 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-04-14 00:02:23.040088 | orchestrator | 00:02:23.039 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-04-14 00:02:23.040111 | orchestrator | 00:02:23.039 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... 
[20s elapsed] 2025-04-14 00:02:23.547033 | orchestrator | 00:02:23.546 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=bbb67f1e-0e51-4357-9930-036eaec8d034] 2025-04-14 00:02:23.605142 | orchestrator | 00:02:23.604 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=54789686-1775-4f19-aa2d-d01eb1a4f856] 2025-04-14 00:02:24.038920 | orchestrator | 00:02:24.038 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=ef115895-3824-440e-a505-1216f84945c3] 2025-04-14 00:02:24.242588 | orchestrator | 00:02:24.242 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=1e7a41cb-6a35-426d-8f59-9f41f8b6d939] 2025-04-14 00:02:33.034481 | orchestrator | 00:02:33.034 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-04-14 00:02:33.040706 | orchestrator | 00:02:33.040 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-04-14 00:02:34.410511 | orchestrator | 00:02:34.410 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=e1df7583-cdd0-40e5-90b7-41adf59e33a9] 2025-04-14 00:02:34.563751 | orchestrator | 00:02:34.563 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 32s [id=f0fddb68-2404-40c1-89d1-644079f7426b] 2025-04-14 00:02:34.590850 | orchestrator | 00:02:34.590 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-04-14 00:02:34.606208 | orchestrator | 00:02:34.602 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-04-14 00:02:34.606881 | orchestrator | 00:02:34.603 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2158630292393969692] 2025-04-14 00:02:34.606902 | orchestrator | 00:02:34.606 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-04-14 00:02:34.608184 | orchestrator | 00:02:34.608 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-04-14 00:02:34.611145 | orchestrator | 00:02:34.611 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-04-14 00:02:34.613079 | orchestrator | 00:02:34.612 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creating... 2025-04-14 00:02:34.618439 | orchestrator | 00:02:34.618 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-04-14 00:02:34.632893 | orchestrator | 00:02:34.632 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creating... 2025-04-14 00:02:34.633359 | orchestrator | 00:02:34.632 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creating... 2025-04-14 00:02:34.640266 | orchestrator | 00:02:34.633 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-04-14 00:02:34.640342 | orchestrator | 00:02:34.639 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creating... 
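[Editorial sketch] Eighteen volume attachments across six node servers correspond to three extra volumes per node. Outside of Terraform, a single attachment of this kind could be reproduced with the OpenStack CLI; the server and volume names below are hypothetical placeholders.

    # Sketch: attach one pre-created volume to one node (names are placeholders).
    openstack server add volume testbed-node-0 testbed-node-0-volume-0
    # Confirm the attachment landed on the expected server.
    openstack volume show testbed-node-0-volume-0 -c attachments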
2025-04-14 00:02:39.941670 | orchestrator | 00:02:39.941 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=54789686-1775-4f19-aa2d-d01eb1a4f856/0623da07-2b86-4b0f-8ae6-479bebb1d3d2] 2025-04-14 00:02:39.954797 | orchestrator | 00:02:39.954 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=bbb67f1e-0e51-4357-9930-036eaec8d034/03a3c0ae-ae5b-4103-947a-830f0553055f] 2025-04-14 00:02:39.957307 | orchestrator | 00:02:39.957 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creating... 2025-04-14 00:02:39.972547 | orchestrator | 00:02:39.972 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-04-14 00:02:39.982429 | orchestrator | 00:02:39.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[12]: Creation complete after 5s [id=f0fddb68-2404-40c1-89d1-644079f7426b/61d8c1b1-8af8-4257-810b-e0715f81f0ca] 2025-04-14 00:02:39.982884 | orchestrator | 00:02:39.982 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=1e7a41cb-6a35-426d-8f59-9f41f8b6d939/ec5b891c-a93a-4443-952c-376a64ed5153] 2025-04-14 00:02:39.994077 | orchestrator | 00:02:39.993 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[13]: Creation complete after 5s [id=e1df7583-cdd0-40e5-90b7-41adf59e33a9/f4895685-066a-4248-b20c-4cd40b9ff210] 2025-04-14 00:02:40.002919 | orchestrator | 00:02:40.002 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creating... 2025-04-14 00:02:40.003170 | orchestrator | 00:02:40.003 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-04-14 00:02:40.006769 | orchestrator | 00:02:40.006 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-04-14 00:02:40.010436 | orchestrator | 00:02:40.010 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=f0fddb68-2404-40c1-89d1-644079f7426b/1d452a86-d7ed-4b7e-a6e2-8adfa0173156] 2025-04-14 00:02:40.015557 | orchestrator | 00:02:40.015 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[15]: Creation complete after 5s [id=54789686-1775-4f19-aa2d-d01eb1a4f856/938a8574-ab31-4693-953b-ad06db98cc0e] 2025-04-14 00:02:40.022500 | orchestrator | 00:02:40.020 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[16]: Creation complete after 5s [id=ef115895-3824-440e-a505-1216f84945c3/bda45bef-0c7e-4642-a586-327a75973f57] 2025-04-14 00:02:40.024189 | orchestrator | 00:02:40.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creating... 2025-04-14 00:02:40.030941 | orchestrator | 00:02:40.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creating... 2025-04-14 00:02:40.038402 | orchestrator | 00:02:40.038 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creating... 
2025-04-14 00:02:40.044217 | orchestrator | 00:02:40.044 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=e1df7583-cdd0-40e5-90b7-41adf59e33a9/f216f5ad-8b9f-40bf-b892-25305f930110] 2025-04-14 00:02:40.056382 | orchestrator | 00:02:40.055 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=f0fddb68-2404-40c1-89d1-644079f7426b/318a826d-e453-41a1-9cbe-aee990c4d38b] 2025-04-14 00:02:40.066859 | orchestrator | 00:02:40.066 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-04-14 00:02:45.306105 | orchestrator | 00:02:45.305 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=1e7a41cb-6a35-426d-8f59-9f41f8b6d939/ff496d9e-d724-4be6-b701-ae323f1b3d4d] 2025-04-14 00:02:45.322315 | orchestrator | 00:02:45.322 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[9]: Creation complete after 5s [id=54789686-1775-4f19-aa2d-d01eb1a4f856/c26cfb84-2784-4068-ac39-279abdffc82e] 2025-04-14 00:02:45.333441 | orchestrator | 00:02:45.333 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=e1df7583-cdd0-40e5-90b7-41adf59e33a9/4c093f95-6486-49b6-be92-05fa28509200] 2025-04-14 00:02:45.348544 | orchestrator | 00:02:45.348 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=ef115895-3824-440e-a505-1216f84945c3/64225693-fc38-404b-a874-78411dc3466d] 2025-04-14 00:02:45.352518 | orchestrator | 00:02:45.352 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[11]: Creation complete after 5s [id=bbb67f1e-0e51-4357-9930-036eaec8d034/d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3] 2025-04-14 00:02:45.367180 | orchestrator | 00:02:45.366 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[14]: Creation complete after 5s [id=1e7a41cb-6a35-426d-8f59-9f41f8b6d939/fc318d73-efa9-4c13-b4ab-953b52f9b4b0] 2025-04-14 00:02:45.384388 | orchestrator | 00:02:45.384 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[10]: Creation complete after 5s [id=ef115895-3824-440e-a505-1216f84945c3/676c1686-7068-4aa0-a437-1ca2ad657cc9] 2025-04-14 00:02:45.388311 | orchestrator | 00:02:45.387 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[17]: Creation complete after 5s [id=bbb67f1e-0e51-4357-9930-036eaec8d034/4f96d1f1-65aa-443a-b2b5-a30371495496] 2025-04-14 00:02:50.068306 | orchestrator | 00:02:50.067 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-04-14 00:03:00.069492 | orchestrator | 00:03:00.069 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-04-14 00:03:00.696281 | orchestrator | 00:03:00.695 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=544fbc3f-9de7-4fff-8d45-274c2d7ebff5] 2025-04-14 00:03:00.712524 | orchestrator | 00:03:00.712 STDOUT terraform: Apply complete! Resources: 82 added, 0 changed, 0 destroyed. 
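[Editorial sketch] After "Apply complete! Resources: 82 added", the stack consists of the manager, six node servers, their volumes and attachments, the networking objects, and the manager floating IP. A quick plausibility check with the OpenStack CLI might look like the following, assuming credentials for the same project are loaded and that the resources follow a testbed-* naming scheme (an assumption, not taken from the log):

    # Sketch: verify the freshly created servers, attached volumes, and floating IP.
    openstack server list --name 'testbed-.*' -c Name -c Status -c Networks
    openstack volume list --status in-use -c Name -c "Attached to"
    openstack floating ip list -c "Floating IP Address" -c "Fixed IP Address"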
2025-04-14 00:03:00.722074 | orchestrator | 00:03:00.712 STDOUT terraform: Outputs: 2025-04-14 00:03:00.722137 | orchestrator | 00:03:00.712 STDOUT terraform: manager_address = 2025-04-14 00:03:00.722167 | orchestrator | 00:03:00.712 STDOUT terraform: private_key = 2025-04-14 00:03:11.148270 | orchestrator | changed 2025-04-14 00:03:11.198360 | 2025-04-14 00:03:11.198604 | TASK [Fetch manager address] 2025-04-14 00:03:11.615674 | orchestrator | ok 2025-04-14 00:03:11.627440 | 2025-04-14 00:03:11.627570 | TASK [Set manager_host address] 2025-04-14 00:03:11.763149 | orchestrator | ok 2025-04-14 00:03:11.772230 | 2025-04-14 00:03:11.772352 | LOOP [Update ansible collections] 2025-04-14 00:03:12.477819 | orchestrator | changed 2025-04-14 00:03:13.205433 | orchestrator | changed 2025-04-14 00:03:13.232237 | 2025-04-14 00:03:13.232418 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-14 00:03:23.778421 | orchestrator | ok 2025-04-14 00:03:23.809290 | 2025-04-14 00:03:23.809423 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-14 00:04:23.862431 | orchestrator | ok 2025-04-14 00:04:23.875142 | 2025-04-14 00:04:23.875256 | TASK [Fetch manager ssh hostkey] 2025-04-14 00:04:24.955816 | orchestrator | Output suppressed because no_log was given 2025-04-14 00:04:24.976148 | 2025-04-14 00:04:24.976312 | TASK [Get ssh keypair from terraform environment] 2025-04-14 00:04:25.525466 | orchestrator | changed 2025-04-14 00:04:25.543064 | 2025-04-14 00:04:25.543219 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-14 00:04:25.585922 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
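[Editorial sketch] The job next reads the Terraform outputs (manager_address, private_key) and waits up to 300 seconds for port 22 on the manager to answer with an OpenSSH banner before fetching its host key. A shell equivalent of that readiness check, assuming the Terraform state from the run above is available in the current directory, could be:

    # Sketch: read the manager address from the Terraform state and wait for sshd.
    MANAGER_ADDRESS=$(terraform output -raw manager_address)
    timeout 300 bash -c \
        "until nc -w 5 $MANAGER_ADDRESS 22 </dev/null | grep -q OpenSSH; do sleep 5; done"
    # Record the host key once the service is reachable.
    ssh-keyscan -H "$MANAGER_ADDRESS" >> ~/.ssh/known_hosts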
2025-04-14 00:04:25.597255 | 2025-04-14 00:04:25.597383 | TASK [Run manager part 0] 2025-04-14 00:04:26.501573 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-14 00:04:26.545503 | orchestrator | 2025-04-14 00:04:28.458769 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-04-14 00:04:28.459034 | orchestrator | 2025-04-14 00:04:28.459110 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-04-14 00:04:28.459166 | orchestrator | ok: [testbed-manager] 2025-04-14 00:04:30.396712 | orchestrator | 2025-04-14 00:04:30.396773 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-14 00:04:30.396785 | orchestrator | 2025-04-14 00:04:30.396791 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:04:30.396803 | orchestrator | ok: [testbed-manager] 2025-04-14 00:04:31.094365 | orchestrator | 2025-04-14 00:04:31.094429 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-14 00:04:31.094447 | orchestrator | ok: [testbed-manager] 2025-04-14 00:04:31.136046 | orchestrator | 2025-04-14 00:04:31.136094 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-14 00:04:31.136109 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.171667 | orchestrator | 2025-04-14 00:04:31.171712 | orchestrator | TASK [Update package cache] **************************************************** 2025-04-14 00:04:31.171726 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.197410 | orchestrator | 2025-04-14 00:04:31.197460 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-14 00:04:31.197491 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.220618 | orchestrator | 2025-04-14 00:04:31.220669 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-14 00:04:31.220683 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.255702 | orchestrator | 2025-04-14 00:04:31.255813 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-14 00:04:31.255846 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.300229 | orchestrator | 2025-04-14 00:04:31.300345 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-04-14 00:04:31.300383 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:31.345982 | orchestrator | 2025-04-14 00:04:31.346383 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-04-14 00:04:31.346419 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:04:32.227059 | orchestrator | 2025-04-14 00:04:32.227128 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-04-14 00:04:32.227147 | orchestrator | changed: [testbed-manager] 2025-04-14 00:07:42.421891 | orchestrator | 2025-04-14 00:07:42.422052 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-04-14 00:07:42.422100 | orchestrator | changed: [testbed-manager] 2025-04-14 00:09:08.905826 | orchestrator | 2025-04-14 00:09:08.906010 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-04-14 00:09:08.906089 | orchestrator | changed: [testbed-manager] 2025-04-14 00:09:33.735975 | orchestrator | 2025-04-14 00:09:33.736165 | orchestrator | TASK [Install required packages] *********************************************** 2025-04-14 00:09:33.736204 | orchestrator | changed: [testbed-manager] 2025-04-14 00:09:44.688625 | orchestrator | 2025-04-14 00:09:44.688754 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-04-14 00:09:44.688791 | orchestrator | changed: [testbed-manager] 2025-04-14 00:09:44.733989 | orchestrator | 2025-04-14 00:09:44.734102 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-14 00:09:44.734148 | orchestrator | ok: [testbed-manager] 2025-04-14 00:09:45.574496 | orchestrator | 2025-04-14 00:09:45.574605 | orchestrator | TASK [Get current user] ******************************************************** 2025-04-14 00:09:45.574638 | orchestrator | ok: [testbed-manager] 2025-04-14 00:09:46.333312 | orchestrator | 2025-04-14 00:09:46.333471 | orchestrator | TASK [Create venv directory] *************************************************** 2025-04-14 00:09:46.333521 | orchestrator | changed: [testbed-manager] 2025-04-14 00:09:53.624826 | orchestrator | 2025-04-14 00:09:53.624938 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-04-14 00:09:53.625039 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:00.230632 | orchestrator | 2025-04-14 00:10:00.230786 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-04-14 00:10:00.230848 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:03.219096 | orchestrator | 2025-04-14 00:10:03.219738 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-04-14 00:10:03.219779 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:05.151765 | orchestrator | 2025-04-14 00:10:05.151873 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-04-14 00:10:05.151909 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:06.346245 | orchestrator | 2025-04-14 00:10:06.346335 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-04-14 00:10:06.346364 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-14 00:10:06.388624 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-14 00:10:06.388681 | orchestrator | 2025-04-14 00:10:06.388689 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-04-14 00:10:06.388703 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-14 00:10:09.668898 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-14 00:10:09.669017 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-14 00:10:09.669038 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-14 00:10:09.669068 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-04-14 00:10:10.249681 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-04-14 00:10:10.249824 | orchestrator | 2025-04-14 00:10:10.249847 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-04-14 00:10:10.249878 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:29.739914 | orchestrator | 2025-04-14 00:10:29.740009 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-04-14 00:10:29.740031 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-04-14 00:10:32.166523 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-04-14 00:10:32.166631 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-04-14 00:10:32.166652 | orchestrator | 2025-04-14 00:10:32.166670 | orchestrator | TASK [Install local collections] *********************************************** 2025-04-14 00:10:32.166700 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-04-14 00:10:33.591555 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-04-14 00:10:33.591651 | orchestrator | 2025-04-14 00:10:33.591670 | orchestrator | PLAY [Create operator user] **************************************************** 2025-04-14 00:10:33.591686 | orchestrator | 2025-04-14 00:10:33.591701 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:10:33.591729 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:33.638586 | orchestrator | 2025-04-14 00:10:33.638670 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-14 00:10:33.638698 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:33.702999 | orchestrator | 2025-04-14 00:10:33.703084 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-14 00:10:33.703114 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:34.489544 | orchestrator | 2025-04-14 00:10:34.490384 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-14 00:10:34.490426 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:35.306396 | orchestrator | 2025-04-14 00:10:35.306511 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-14 00:10:35.306539 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:36.750680 | orchestrator | 2025-04-14 00:10:36.750757 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-14 00:10:36.750787 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-04-14 00:10:38.196125 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-04-14 00:10:38.196182 | orchestrator | 2025-04-14 00:10:38.196193 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-14 00:10:38.196211 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:40.028529 | orchestrator | 2025-04-14 00:10:40.028585 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-14 00:10:40.028603 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 
00:10:40.637978 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-04-14 00:10:40.638128 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:10:40.638149 | orchestrator | 2025-04-14 00:10:40.638163 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-14 00:10:40.638190 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:40.710390 | orchestrator | 2025-04-14 00:10:40.710497 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-14 00:10:40.710531 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:41.596819 | orchestrator | 2025-04-14 00:10:41.596970 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-04-14 00:10:41.597021 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:10:41.636457 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:41.636568 | orchestrator | 2025-04-14 00:10:41.636588 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-14 00:10:41.636620 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:41.675568 | orchestrator | 2025-04-14 00:10:41.675671 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-14 00:10:41.675705 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:41.713035 | orchestrator | 2025-04-14 00:10:41.713136 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-14 00:10:41.713170 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:41.765294 | orchestrator | 2025-04-14 00:10:41.765388 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-14 00:10:41.765419 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:42.505282 | orchestrator | 2025-04-14 00:10:42.505401 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-14 00:10:42.505454 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:43.942381 | orchestrator | 2025-04-14 00:10:43.942427 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-04-14 00:10:43.942434 | orchestrator | 2025-04-14 00:10:43.942440 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:10:43.942451 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:44.939876 | orchestrator | 2025-04-14 00:10:44.940724 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-04-14 00:10:44.940744 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:45.053969 | orchestrator | 2025-04-14 00:10:45.054162 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:10:45.054173 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-04-14 00:10:45.054178 | orchestrator | 2025-04-14 00:10:45.403410 | orchestrator | changed 2025-04-14 00:10:45.423582 | 2025-04-14 00:10:45.423719 | TASK [Point out that the log in on the manager is now possible] 2025-04-14 00:10:45.474072 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
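[Editorial sketch] Manager part 0 above amounts to preparing a Python virtual environment in /opt/venv, installing a handful of Python packages and Ansible collections, and creating the operator user. A condensed shell sketch of the venv and collection bootstrap, with versions taken from the task names above and paths assumed:

    # Sketch of what part 0 installs into /opt/venv (run as a user with write access).
    python3 -m venv /opt/venv
    /opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
    # Collections pulled from Ansible Galaxy, as in the "Install collections" task.
    /opt/venv/bin/ansible-galaxy collection install \
        ansible.netcommon ansible.posix 'community.docker:>=3.10.2'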
2025-04-14 00:10:45.485226 | 2025-04-14 00:10:45.485353 | TASK [Point out that the following task takes some time and does not give any output] 2025-04-14 00:10:45.519748 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-04-14 00:10:45.528350 | 2025-04-14 00:10:45.528489 | TASK [Run manager part 1 + 2] 2025-04-14 00:10:46.381445 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-04-14 00:10:46.436459 | orchestrator | 2025-04-14 00:10:48.965471 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-04-14 00:10:48.965540 | orchestrator | 2025-04-14 00:10:48.965561 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:10:48.965579 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:49.009079 | orchestrator | 2025-04-14 00:10:49.009157 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-04-14 00:10:49.009186 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:49.051386 | orchestrator | 2025-04-14 00:10:49.051456 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-04-14 00:10:49.051477 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:49.098750 | orchestrator | 2025-04-14 00:10:49.098811 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-14 00:10:49.098831 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:49.163291 | orchestrator | 2025-04-14 00:10:49.163356 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-14 00:10:49.163376 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:49.238475 | orchestrator | 2025-04-14 00:10:49.238542 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-14 00:10:49.238563 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:49.288726 | orchestrator | 2025-04-14 00:10:49.288779 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-14 00:10:49.288794 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-04-14 00:10:50.018162 | orchestrator | 2025-04-14 00:10:50.018237 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-14 00:10:50.018258 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:50.067694 | orchestrator | 2025-04-14 00:10:50.067758 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-14 00:10:50.067776 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:10:51.477207 | orchestrator | 2025-04-14 00:10:51.477280 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-14 00:10:51.477307 | orchestrator | changed: [testbed-manager] 2025-04-14 00:10:52.082902 | orchestrator | 2025-04-14 00:10:52.083028 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-04-14 00:10:52.083051 | orchestrator | ok: [testbed-manager] 2025-04-14 00:10:53.321317 | orchestrator | 2025-04-14 00:10:53.321386 | orchestrator | TASK 
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-14 00:10:53.321409 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:06.891908 | orchestrator | 2025-04-14 00:11:06.892025 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-14 00:11:06.892056 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:07.620256 | orchestrator | 2025-04-14 00:11:07.620820 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-04-14 00:11:07.620866 | orchestrator | ok: [testbed-manager] 2025-04-14 00:11:07.672350 | orchestrator | 2025-04-14 00:11:07.672449 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-04-14 00:11:07.672480 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:11:08.685334 | orchestrator | 2025-04-14 00:11:08.685407 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-04-14 00:11:08.685431 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:09.674150 | orchestrator | 2025-04-14 00:11:09.674294 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-04-14 00:11:09.674332 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:10.260352 | orchestrator | 2025-04-14 00:11:10.260435 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-04-14 00:11:10.260466 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:10.308739 | orchestrator | 2025-04-14 00:11:10.308857 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-04-14 00:11:10.308899 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-04-14 00:11:12.733072 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-04-14 00:11:12.733169 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-04-14 00:11:12.733183 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-04-14 00:11:12.733207 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:22.484573 | orchestrator | 2025-04-14 00:11:22.484637 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-04-14 00:11:22.484657 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-04-14 00:11:23.981787 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-04-14 00:11:23.981873 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-04-14 00:11:23.981891 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-04-14 00:11:23.981927 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-04-14 00:11:23.981943 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-04-14 00:11:23.981958 | orchestrator | 2025-04-14 00:11:23.981973 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-04-14 00:11:23.982051 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:24.019318 | orchestrator | 2025-04-14 00:11:24.019393 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-04-14 00:11:24.019421 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:11:27.297250 | orchestrator | 2025-04-14 00:11:27.297332 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-04-14 00:11:27.297365 | orchestrator | changed: [testbed-manager] 2025-04-14 00:11:27.337485 | orchestrator | 2025-04-14 00:11:27.337566 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-04-14 00:11:27.337594 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:13:12.066234 | orchestrator | 2025-04-14 00:13:12.066403 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-04-14 00:13:12.066442 | orchestrator | changed: [testbed-manager] 2025-04-14 00:13:13.246137 | orchestrator | 2025-04-14 00:13:13.246208 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-14 00:13:13.246233 | orchestrator | ok: [testbed-manager] 2025-04-14 00:13:13.339049 | orchestrator | 2025-04-14 00:13:13.339240 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:13:13.339254 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-04-14 00:13:13.339260 | orchestrator | 2025-04-14 00:13:13.670326 | orchestrator | changed 2025-04-14 00:13:13.692097 | 2025-04-14 00:13:13.692248 | TASK [Reboot manager] 2025-04-14 00:13:15.237686 | orchestrator | changed 2025-04-14 00:13:15.257713 | 2025-04-14 00:13:15.257908 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-04-14 00:13:31.648976 | orchestrator | ok 2025-04-14 00:13:31.660711 | 2025-04-14 00:13:31.660891 | TASK [Wait a little longer for the manager so that everything is ready] 2025-04-14 00:14:31.717192 | orchestrator | ok 2025-04-14 00:14:31.728221 | 2025-04-14 00:14:31.728341 | TASK [Deploy manager + bootstrap nodes] 2025-04-14 00:14:34.262389 | orchestrator | 2025-04-14 00:14:34.265286 | orchestrator | # DEPLOY MANAGER 2025-04-14 00:14:34.265355 | orchestrator | 2025-04-14 00:14:34.265373 | orchestrator | + set -e 2025-04-14 00:14:34.265418 | orchestrator | + echo 2025-04-14 00:14:34.265435 | orchestrator | + echo '# DEPLOY MANAGER' 2025-04-14 00:14:34.265450 | 
orchestrator | + echo 2025-04-14 00:14:34.265471 | orchestrator | + cat /opt/manager-vars.sh 2025-04-14 00:14:34.265506 | orchestrator | export NUMBER_OF_NODES=6 2025-04-14 00:14:34.266216 | orchestrator | 2025-04-14 00:14:34.266294 | orchestrator | export CEPH_VERSION=quincy 2025-04-14 00:14:34.266302 | orchestrator | export CONFIGURATION_VERSION=main 2025-04-14 00:14:34.266308 | orchestrator | export MANAGER_VERSION=8.1.0 2025-04-14 00:14:34.266313 | orchestrator | export OPENSTACK_VERSION=2024.1 2025-04-14 00:14:34.266318 | orchestrator | 2025-04-14 00:14:34.266325 | orchestrator | export ARA=false 2025-04-14 00:14:34.266330 | orchestrator | export TEMPEST=false 2025-04-14 00:14:34.266336 | orchestrator | export IS_ZUUL=true 2025-04-14 00:14:34.266340 | orchestrator | 2025-04-14 00:14:34.266345 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:14:34.266351 | orchestrator | export EXTERNAL_API=false 2025-04-14 00:14:34.266356 | orchestrator | 2025-04-14 00:14:34.266361 | orchestrator | export IMAGE_USER=ubuntu 2025-04-14 00:14:34.266366 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-04-14 00:14:34.266371 | orchestrator | 2025-04-14 00:14:34.266376 | orchestrator | export CEPH_STACK=ceph-ansible 2025-04-14 00:14:34.266390 | orchestrator | 2025-04-14 00:14:34.266847 | orchestrator | + echo 2025-04-14 00:14:34.266856 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-14 00:14:34.266864 | orchestrator | ++ export INTERACTIVE=false 2025-04-14 00:14:34.267162 | orchestrator | ++ INTERACTIVE=false 2025-04-14 00:14:34.267171 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-14 00:14:34.267185 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-14 00:14:34.267192 | orchestrator | + source /opt/manager-vars.sh 2025-04-14 00:14:34.267290 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-14 00:14:34.267571 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-14 00:14:34.267622 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-14 00:14:34.267649 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-14 00:14:34.267970 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-14 00:14:34.267990 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-14 00:14:34.268012 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-14 00:14:34.268026 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-14 00:14:34.268038 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-14 00:14:34.268050 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-14 00:14:34.268062 | orchestrator | ++ export ARA=false 2025-04-14 00:14:34.268074 | orchestrator | ++ ARA=false 2025-04-14 00:14:34.268087 | orchestrator | ++ export TEMPEST=false 2025-04-14 00:14:34.268099 | orchestrator | ++ TEMPEST=false 2025-04-14 00:14:34.268111 | orchestrator | ++ export IS_ZUUL=true 2025-04-14 00:14:34.268123 | orchestrator | ++ IS_ZUUL=true 2025-04-14 00:14:34.268135 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:14:34.268148 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:14:34.268167 | orchestrator | ++ export EXTERNAL_API=false 2025-04-14 00:14:34.268180 | orchestrator | ++ EXTERNAL_API=false 2025-04-14 00:14:34.268192 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-14 00:14:34.268204 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-14 00:14:34.268216 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-14 00:14:34.268228 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-14 00:14:34.268258 | orchestrator | ++ export 
CEPH_STACK=ceph-ansible 2025-04-14 00:14:34.268281 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-14 00:14:34.268299 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-04-14 00:14:34.334808 | orchestrator | + docker version 2025-04-14 00:14:34.588205 | orchestrator | Client: Docker Engine - Community 2025-04-14 00:14:34.591269 | orchestrator | Version: 26.1.4 2025-04-14 00:14:34.591329 | orchestrator | API version: 1.45 2025-04-14 00:14:34.591345 | orchestrator | Go version: go1.21.11 2025-04-14 00:14:34.591359 | orchestrator | Git commit: 5650f9b 2025-04-14 00:14:34.591378 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-14 00:14:34.591405 | orchestrator | OS/Arch: linux/amd64 2025-04-14 00:14:34.591433 | orchestrator | Context: default 2025-04-14 00:14:34.591460 | orchestrator | 2025-04-14 00:14:34.591488 | orchestrator | Server: Docker Engine - Community 2025-04-14 00:14:34.591511 | orchestrator | Engine: 2025-04-14 00:14:34.591538 | orchestrator | Version: 26.1.4 2025-04-14 00:14:34.591564 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-04-14 00:14:34.591590 | orchestrator | Go version: go1.21.11 2025-04-14 00:14:34.591629 | orchestrator | Git commit: de5c9cf 2025-04-14 00:14:34.591679 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-04-14 00:14:34.591694 | orchestrator | OS/Arch: linux/amd64 2025-04-14 00:14:34.591708 | orchestrator | Experimental: false 2025-04-14 00:14:34.591722 | orchestrator | containerd: 2025-04-14 00:14:34.591736 | orchestrator | Version: 1.7.27 2025-04-14 00:14:34.591750 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-04-14 00:14:34.591765 | orchestrator | runc: 2025-04-14 00:14:34.591779 | orchestrator | Version: 1.2.5 2025-04-14 00:14:34.591794 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-04-14 00:14:34.591807 | orchestrator | docker-init: 2025-04-14 00:14:34.591821 | orchestrator | Version: 0.19.0 2025-04-14 00:14:34.591861 | orchestrator | GitCommit: de40ad0 2025-04-14 00:14:34.591887 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-04-14 00:14:34.600955 | orchestrator | + set -e 2025-04-14 00:14:34.601169 | orchestrator | + source /opt/manager-vars.sh 2025-04-14 00:14:34.601210 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-14 00:14:34.601689 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-14 00:14:34.601709 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-14 00:14:34.601723 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-14 00:14:34.601737 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-14 00:14:34.601751 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-14 00:14:34.601765 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-14 00:14:34.601779 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-14 00:14:34.601793 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-14 00:14:34.601807 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-14 00:14:34.601820 | orchestrator | ++ export ARA=false 2025-04-14 00:14:34.601856 | orchestrator | ++ ARA=false 2025-04-14 00:14:34.601871 | orchestrator | ++ export TEMPEST=false 2025-04-14 00:14:34.601884 | orchestrator | ++ TEMPEST=false 2025-04-14 00:14:34.601898 | orchestrator | ++ export IS_ZUUL=true 2025-04-14 00:14:34.601912 | orchestrator | ++ IS_ZUUL=true 2025-04-14 00:14:34.601926 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:14:34.601940 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 
2025-04-14 00:14:34.601955 | orchestrator | ++ export EXTERNAL_API=false 2025-04-14 00:14:34.601969 | orchestrator | ++ EXTERNAL_API=false 2025-04-14 00:14:34.601982 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-14 00:14:34.601995 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-14 00:14:34.602009 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-14 00:14:34.602069 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-14 00:14:34.602085 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-14 00:14:34.602098 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-14 00:14:34.602112 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-14 00:14:34.602134 | orchestrator | ++ export INTERACTIVE=false 2025-04-14 00:14:34.602148 | orchestrator | ++ INTERACTIVE=false 2025-04-14 00:14:34.602167 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-14 00:14:34.602181 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-14 00:14:34.602200 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-14 00:14:34.610355 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-04-14 00:14:34.610427 | orchestrator | + set -e 2025-04-14 00:14:34.618192 | orchestrator | + VERSION=8.1.0 2025-04-14 00:14:34.618220 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-04-14 00:14:34.618244 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-14 00:14:34.623855 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-14 00:14:34.623908 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-04-14 00:14:34.628079 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-04-14 00:14:34.636394 | orchestrator | /opt/configuration ~ 2025-04-14 00:14:34.638997 | orchestrator | + set -e 2025-04-14 00:14:34.639028 | orchestrator | + pushd /opt/configuration 2025-04-14 00:14:34.639038 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-14 00:14:34.639053 | orchestrator | + source /opt/venv/bin/activate 2025-04-14 00:14:34.640100 | orchestrator | ++ deactivate nondestructive 2025-04-14 00:14:34.640148 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:34.640157 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:34.640167 | orchestrator | ++ hash -r 2025-04-14 00:14:34.640175 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:34.640183 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-14 00:14:34.640192 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-14 00:14:34.640200 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-04-14 00:14:34.640228 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-14 00:14:34.640237 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-14 00:14:34.640246 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-14 00:14:34.640262 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-14 00:14:34.640277 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:14:34.640295 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:14:34.640546 | orchestrator | ++ export PATH 2025-04-14 00:14:34.640561 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:34.640570 | orchestrator | ++ '[' -z '' ']' 2025-04-14 00:14:34.640582 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-14 00:14:35.800133 | orchestrator | ++ PS1='(venv) ' 2025-04-14 00:14:35.800262 | orchestrator | ++ export PS1 2025-04-14 00:14:35.800282 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-14 00:14:35.800299 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-14 00:14:35.800314 | orchestrator | ++ hash -r 2025-04-14 00:14:35.800330 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-04-14 00:14:35.800368 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-04-14 00:14:35.800611 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-04-14 00:14:35.802155 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-04-14 00:14:35.803408 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-04-14 00:14:35.804524 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (24.2) 2025-04-14 00:14:35.814463 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.1.8) 2025-04-14 00:14:35.815933 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-04-14 00:14:35.817075 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-04-14 00:14:35.818409 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-04-14 00:14:35.850727 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.1) 2025-04-14 00:14:35.852159 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-04-14 00:14:35.853684 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-04-14 00:14:35.855125 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.1.31) 2025-04-14 00:14:35.859213 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-04-14 00:14:36.084177 | orchestrator | ++ which gilt 2025-04-14 00:14:36.086470 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-04-14 00:14:36.323742 | orchestrator | + /opt/venv/bin/gilt overlay 2025-04-14 00:14:36.323967 | orchestrator | osism.cfg-generics: 2025-04-14 00:14:37.894372 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-04-14 00:14:37.894539 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-04-14 00:14:38.850371 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-04-14 00:14:38.850495 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-04-14 00:14:38.850512 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-04-14 00:14:38.850542 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-04-14 00:14:38.861595 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-04-14 00:14:39.222408 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-04-14 00:14:39.282684 | orchestrator | ~ 2025-04-14 00:14:39.284286 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-14 00:14:39.284317 | orchestrator | + deactivate 2025-04-14 00:14:39.284352 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-14 00:14:39.284369 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:14:39.284383 | orchestrator | + export PATH 2025-04-14 00:14:39.284397 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-14 00:14:39.284411 | orchestrator | + '[' -n '' ']' 2025-04-14 00:14:39.284425 | orchestrator | + hash -r 2025-04-14 00:14:39.284439 | orchestrator | + '[' -n '' ']' 2025-04-14 00:14:39.284453 | orchestrator | + unset VIRTUAL_ENV 2025-04-14 00:14:39.284467 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-14 00:14:39.284481 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-04-14 00:14:39.284498 | orchestrator | + unset -f deactivate 2025-04-14 00:14:39.284512 | orchestrator | + popd 2025-04-14 00:14:39.284533 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-14 00:14:39.284592 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-04-14 00:14:39.284611 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-14 00:14:39.338907 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-14 00:14:39.382819 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-04-14 00:14:39.382978 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-04-14 00:14:39.383029 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-14 00:14:40.876658 | orchestrator | + source /opt/venv/bin/activate 2025-04-14 00:14:40.876791 | orchestrator | ++ deactivate nondestructive 2025-04-14 00:14:40.876812 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:40.876854 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:40.876917 | orchestrator | ++ hash -r 2025-04-14 00:14:40.876934 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:40.876963 | orchestrator | ++ unset VIRTUAL_ENV 2025-04-14 00:14:40.876977 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-04-14 00:14:40.876992 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-04-14 00:14:40.877006 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-04-14 00:14:40.877020 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-04-14 00:14:40.877034 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-04-14 00:14:40.877048 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-04-14 00:14:40.877063 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:14:40.877077 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:14:40.877091 | orchestrator | ++ export PATH 2025-04-14 00:14:40.877105 | orchestrator | ++ '[' -n '' ']' 2025-04-14 00:14:40.877118 | orchestrator | ++ '[' -z '' ']' 2025-04-14 00:14:40.877132 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-04-14 00:14:40.877146 | orchestrator | ++ PS1='(venv) ' 2025-04-14 00:14:40.877159 | orchestrator | ++ export PS1 2025-04-14 00:14:40.877173 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-04-14 00:14:40.877187 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-04-14 00:14:40.877206 | orchestrator | ++ hash -r 2025-04-14 00:14:40.877221 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-04-14 00:14:40.877254 | orchestrator | 2025-04-14 00:14:41.468507 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-04-14 00:14:41.468632 | orchestrator | 2025-04-14 00:14:41.468653 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-14 00:14:41.468683 | orchestrator | ok: [testbed-manager] 2025-04-14 00:14:42.575644 | orchestrator | 2025-04-14 00:14:42.575888 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-14 00:14:42.575971 | orchestrator | changed: [testbed-manager] 2025-04-14 00:14:45.065252 | orchestrator | 2025-04-14 00:14:45.065380 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-04-14 00:14:45.065400 | orchestrator | 2025-04-14 
00:14:45.065415 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:14:45.065447 | orchestrator | ok: [testbed-manager] 2025-04-14 00:14:50.561458 | orchestrator | 2025-04-14 00:14:50.561602 | orchestrator | TASK [Pull images] ************************************************************* 2025-04-14 00:14:50.561674 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-04-14 00:16:17.447104 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/mariadb:11.6.2) 2025-04-14 00:16:17.447249 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-04-14 00:16:17.447270 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-04-14 00:16:17.447285 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-04-14 00:16:17.447301 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/redis:7.4.1-alpine) 2025-04-14 00:16:17.447315 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-04-14 00:16:17.447330 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-04-14 00:16:17.447344 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-04-14 00:16:17.447366 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/postgres:16.6-alpine) 2025-04-14 00:16:17.447381 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/library/traefik:v3.2.1) 2025-04-14 00:16:17.447395 | orchestrator | changed: [testbed-manager] => (item=index.docker.io/hashicorp/vault:1.18.2) 2025-04-14 00:16:17.447409 | orchestrator | 2025-04-14 00:16:17.447424 | orchestrator | TASK [Check status] ************************************************************ 2025-04-14 00:16:17.447455 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-14 00:16:17.497599 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-04-14 00:16:17.497703 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-04-14 00:16:17.497717 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-04-14 00:16:17.497729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (116 retries left). 2025-04-14 00:16:17.497745 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j605995368122.1586', 'results_file': '/home/dragon/.ansible_async/j605995368122.1586', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497796 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j817601535326.1611', 'results_file': '/home/dragon/.ansible_async/j817601535326.1611', 'changed': True, 'item': 'index.docker.io/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497811 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-04-14 00:16:17.497823 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j179141800188.1636', 'results_file': '/home/dragon/.ansible_async/j179141800188.1636', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497840 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j348096203626.1668', 'results_file': '/home/dragon/.ansible_async/j348096203626.1668', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497855 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-14 00:16:17.497867 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j224782130425.1701', 'results_file': '/home/dragon/.ansible_async/j224782130425.1701', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497878 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j130670779451.1740', 'results_file': '/home/dragon/.ansible_async/j130670779451.1740', 'changed': True, 'item': 'index.docker.io/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497920 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-04-14 00:16:17.497932 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j480432594958.1765', 'results_file': '/home/dragon/.ansible_async/j480432594958.1765', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497943 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j917101637398.1798', 'results_file': '/home/dragon/.ansible_async/j917101637398.1798', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497954 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j53130824730.1831', 'results_file': '/home/dragon/.ansible_async/j53130824730.1831', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497965 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j524070920236.1865', 'results_file': '/home/dragon/.ansible_async/j524070920236.1865', 'changed': True, 'item': 'index.docker.io/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497976 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j423497422989.1898', 'results_file': '/home/dragon/.ansible_async/j423497422989.1898', 'changed': True, 'item': 'index.docker.io/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497987 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j81504506467.1930', 'results_file': '/home/dragon/.ansible_async/j81504506467.1930', 'changed': True, 'item': 'index.docker.io/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-04-14 00:16:17.497998 
| orchestrator | 2025-04-14 00:16:17.498010 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-04-14 00:16:17.498079 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:17.995675 | orchestrator | 2025-04-14 00:16:17.995843 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-04-14 00:16:17.995881 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:18.343560 | orchestrator | 2025-04-14 00:16:18.343684 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-04-14 00:16:18.343734 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:18.748735 | orchestrator | 2025-04-14 00:16:18.748969 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-04-14 00:16:18.749029 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:18.807634 | orchestrator | 2025-04-14 00:16:18.807801 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-04-14 00:16:18.807856 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:16:19.146709 | orchestrator | 2025-04-14 00:16:19.146896 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-04-14 00:16:19.146937 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:19.322640 | orchestrator | 2025-04-14 00:16:19.322843 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-04-14 00:16:19.322883 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:16:21.268060 | orchestrator | 2025-04-14 00:16:21.268204 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-04-14 00:16:21.268225 | orchestrator | 2025-04-14 00:16:21.268241 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:16:21.268275 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:21.486464 | orchestrator | 2025-04-14 00:16:21.486607 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-04-14 00:16:21.486646 | orchestrator | 2025-04-14 00:16:21.586642 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-04-14 00:16:21.586926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-04-14 00:16:22.755835 | orchestrator | 2025-04-14 00:16:22.755967 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-04-14 00:16:22.756003 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-04-14 00:16:24.680703 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-04-14 00:16:24.680912 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-04-14 00:16:24.680946 | orchestrator | 2025-04-14 00:16:24.680968 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-04-14 00:16:24.680999 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-04-14 00:16:25.396594 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-04-14 00:16:25.396747 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-04-14 00:16:25.396792 | orchestrator | 2025-04-14 
00:16:25.396809 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-04-14 00:16:25.396842 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:26.124651 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:26.124831 | orchestrator | 2025-04-14 00:16:26.124856 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-04-14 00:16:26.124888 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:26.205663 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:26.205859 | orchestrator | 2025-04-14 00:16:26.205893 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-04-14 00:16:26.205940 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:16:26.626639 | orchestrator | 2025-04-14 00:16:26.626751 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-04-14 00:16:26.626829 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:26.745941 | orchestrator | 2025-04-14 00:16:26.746100 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-04-14 00:16:26.746133 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-04-14 00:16:27.852018 | orchestrator | 2025-04-14 00:16:27.852164 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-04-14 00:16:27.853038 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:28.854281 | orchestrator | 2025-04-14 00:16:28.854415 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-04-14 00:16:28.854453 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:32.406118 | orchestrator | 2025-04-14 00:16:32.406244 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-04-14 00:16:32.406282 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:32.746959 | orchestrator | 2025-04-14 00:16:32.747110 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-04-14 00:16:32.747151 | orchestrator | 2025-04-14 00:16:32.864830 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-04-14 00:16:32.864958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-04-14 00:16:35.615039 | orchestrator | 2025-04-14 00:16:35.615165 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-04-14 00:16:35.615199 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:35.775972 | orchestrator | 2025-04-14 00:16:35.776100 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-14 00:16:35.776136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-04-14 00:16:36.981232 | orchestrator | 2025-04-14 00:16:36.981357 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-04-14 00:16:36.981394 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-04-14 00:16:37.101127 | orchestrator | changed: 
[testbed-manager] => (item=/opt/netbox/configuration) 2025-04-14 00:16:37.101248 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-04-14 00:16:37.101267 | orchestrator | 2025-04-14 00:16:37.101282 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-04-14 00:16:37.101314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-04-14 00:16:37.763625 | orchestrator | 2025-04-14 00:16:37.763739 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-04-14 00:16:37.763819 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-04-14 00:16:38.494295 | orchestrator | 2025-04-14 00:16:38.494421 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-04-14 00:16:38.494469 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:39.186279 | orchestrator | 2025-04-14 00:16:39.186402 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-14 00:16:39.186438 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:39.654689 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:39.654810 | orchestrator | 2025-04-14 00:16:39.654820 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-04-14 00:16:39.654838 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:40.163715 | orchestrator | 2025-04-14 00:16:40.163891 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-04-14 00:16:40.163929 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:40.225308 | orchestrator | 2025-04-14 00:16:40.225418 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-04-14 00:16:40.225448 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:16:40.891340 | orchestrator | 2025-04-14 00:16:40.891471 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-04-14 00:16:40.891507 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:41.000906 | orchestrator | 2025-04-14 00:16:41.001015 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-04-14 00:16:41.001046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-04-14 00:16:41.853559 | orchestrator | 2025-04-14 00:16:41.853689 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-04-14 00:16:41.853726 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-04-14 00:16:42.579656 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-04-14 00:16:42.579824 | orchestrator | 2025-04-14 00:16:42.579846 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-04-14 00:16:42.579879 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-04-14 00:16:43.322501 | orchestrator | 2025-04-14 00:16:43.322626 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-04-14 00:16:43.322661 | orchestrator | changed: [testbed-manager] 2025-04-14 
00:16:43.387207 | orchestrator | 2025-04-14 00:16:43.387324 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-04-14 00:16:43.387361 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:16:44.074785 | orchestrator | 2025-04-14 00:16:44.074902 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-04-14 00:16:44.074933 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:45.967301 | orchestrator | 2025-04-14 00:16:45.967426 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-04-14 00:16:45.967463 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:52.523671 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:52.523867 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:16:52.523892 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:52.523908 | orchestrator | 2025-04-14 00:16:52.523923 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-04-14 00:16:52.523955 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-04-14 00:16:53.264506 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-04-14 00:16:53.264629 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-04-14 00:16:53.264648 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-04-14 00:16:53.264664 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-04-14 00:16:53.264679 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-04-14 00:16:53.264694 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-04-14 00:16:53.264735 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-04-14 00:16:53.264749 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-04-14 00:16:53.264807 | orchestrator | changed: [testbed-manager] => (item=users) 2025-04-14 00:16:53.264822 | orchestrator | 2025-04-14 00:16:53.264837 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-04-14 00:16:53.264870 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-04-14 00:16:53.458278 | orchestrator | 2025-04-14 00:16:53.458381 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-04-14 00:16:53.458407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-04-14 00:16:54.227981 | orchestrator | 2025-04-14 00:16:54.228101 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-04-14 00:16:54.228138 | orchestrator | changed: [testbed-manager] 2025-04-14 00:16:54.966232 | orchestrator | 2025-04-14 00:16:54.966357 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-04-14 00:16:54.966394 | orchestrator | ok: [testbed-manager] 2025-04-14 00:16:55.755537 | orchestrator | 2025-04-14 00:16:55.755660 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-04-14 00:16:55.755698 | orchestrator | changed: [testbed-manager] 2025-04-14 00:17:01.446410 | orchestrator | 2025-04-14 00:17:01.446542 | 
orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-04-14 00:17:01.446576 | orchestrator | changed: [testbed-manager] 2025-04-14 00:17:02.502823 | orchestrator | 2025-04-14 00:17:02.502957 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-04-14 00:17:02.502996 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:24.805891 | orchestrator | 2025-04-14 00:17:24.806091 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-04-14 00:17:24.806133 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-04-14 00:17:24.900259 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:24.900358 | orchestrator | 2025-04-14 00:17:24.900376 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-04-14 00:17:24.900407 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:24.972484 | orchestrator | 2025-04-14 00:17:24.972587 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-04-14 00:17:24.972603 | orchestrator | 2025-04-14 00:17:24.972617 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-04-14 00:17:24.972643 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:25.074285 | orchestrator | 2025-04-14 00:17:25.074396 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-14 00:17:25.074430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-04-14 00:17:25.966581 | orchestrator | 2025-04-14 00:17:25.966706 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-04-14 00:17:25.966778 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:26.077711 | orchestrator | 2025-04-14 00:17:26.077885 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-04-14 00:17:26.077925 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:26.134939 | orchestrator | 2025-04-14 00:17:26.135058 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-04-14 00:17:26.135092 | orchestrator | ok: [testbed-manager] => { 2025-04-14 00:17:26.830183 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-04-14 00:17:26.830309 | orchestrator | } 2025-04-14 00:17:26.830329 | orchestrator | 2025-04-14 00:17:26.830345 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-04-14 00:17:26.830375 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:27.820086 | orchestrator | 2025-04-14 00:17:27.820242 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-04-14 00:17:27.820284 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:27.932480 | orchestrator | 2025-04-14 00:17:27.932598 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-04-14 00:17:27.932647 | orchestrator | ok: [testbed-manager] 2025-04-14 00:17:27.996811 | orchestrator | 2025-04-14 00:17:27.996943 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-04-14 00:17:27.997006 | 
orchestrator | ok: [testbed-manager] => { 2025-04-14 00:17:28.074285 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-04-14 00:17:28.074397 | orchestrator | } 2025-04-14 00:17:28.074415 | orchestrator | 2025-04-14 00:17:28.074431 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-04-14 00:17:28.074461 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:28.166303 | orchestrator | 2025-04-14 00:17:28.166399 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-04-14 00:17:28.166431 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:28.247998 | orchestrator | 2025-04-14 00:17:28.248061 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-04-14 00:17:28.248087 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:28.328935 | orchestrator | 2025-04-14 00:17:28.328995 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-04-14 00:17:28.329021 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:28.395250 | orchestrator | 2025-04-14 00:17:28.395334 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-04-14 00:17:28.395363 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:28.477943 | orchestrator | 2025-04-14 00:17:28.478112 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-04-14 00:17:28.478149 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:17:30.009271 | orchestrator | 2025-04-14 00:17:30.009405 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-04-14 00:17:30.009445 | orchestrator | changed: [testbed-manager] 2025-04-14 00:17:30.132246 | orchestrator | 2025-04-14 00:17:30.132375 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-04-14 00:17:30.132410 | orchestrator | ok: [testbed-manager] 2025-04-14 00:18:30.218826 | orchestrator | 2025-04-14 00:18:30.218962 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-04-14 00:18:30.218998 | orchestrator | Pausing for 60 seconds 2025-04-14 00:18:30.336234 | orchestrator | changed: [testbed-manager] 2025-04-14 00:18:30.336351 | orchestrator | 2025-04-14 00:18:30.336370 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-04-14 00:18:30.336402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-04-14 00:23:14.427690 | orchestrator | 2025-04-14 00:23:14.427822 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-04-14 00:23:14.427857 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 2025-04-14 00:23:16.658962 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-04-14 00:23:16.659083 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-04-14 00:23:16.659102 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 
2025-04-14 00:23:16.659117 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-04-14 00:23:16.659131 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-04-14 00:23:16.659145 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-04-14 00:23:16.659159 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-04-14 00:23:16.659173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-04-14 00:23:16.659186 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-04-14 00:23:16.659232 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-04-14 00:23:16.659247 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-04-14 00:23:16.659260 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-04-14 00:23:16.659275 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-04-14 00:23:16.659288 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-04-14 00:23:16.659302 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-04-14 00:23:16.659316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-04-14 00:23:16.659329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-04-14 00:23:16.659343 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-04-14 00:23:16.659370 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-04-14 00:23:16.659384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-04-14 00:23:16.659398 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-04-14 00:23:16.659412 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-04-14 00:23:16.659426 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 2025-04-14 00:23:16.659439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (36 retries left). 2025-04-14 00:23:16.659453 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (35 retries left). 2025-04-14 00:23:16.659466 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (34 retries left). 
2025-04-14 00:23:16.659480 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:16.659495 | orchestrator | 2025-04-14 00:23:16.659510 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-04-14 00:23:16.659524 | orchestrator | 2025-04-14 00:23:16.659538 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:23:16.659569 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:16.786735 | orchestrator | 2025-04-14 00:23:16.786852 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-04-14 00:23:16.786887 | orchestrator | 2025-04-14 00:23:16.845770 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-04-14 00:23:16.845870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-04-14 00:23:18.724489 | orchestrator | 2025-04-14 00:23:18.724661 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-04-14 00:23:18.724696 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:18.791400 | orchestrator | 2025-04-14 00:23:18.791549 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-04-14 00:23:18.791625 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:18.910803 | orchestrator | 2025-04-14 00:23:18.910937 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-04-14 00:23:18.910981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-04-14 00:23:21.909722 | orchestrator | 2025-04-14 00:23:21.909857 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-04-14 00:23:21.909895 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-04-14 00:23:22.603437 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-04-14 00:23:22.603633 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-04-14 00:23:22.603656 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-04-14 00:23:22.603671 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-04-14 00:23:22.603686 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-04-14 00:23:22.603700 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-04-14 00:23:22.603714 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-04-14 00:23:22.603728 | orchestrator | 2025-04-14 00:23:22.603743 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-04-14 00:23:22.603774 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:22.706702 | orchestrator | 2025-04-14 00:23:22.706841 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-04-14 00:23:22.706906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-04-14 00:23:24.007636 | orchestrator | 2025-04-14 00:23:24.007763 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-04-14 00:23:24.007801 | orchestrator | 
changed: [testbed-manager] => (item=ara) 2025-04-14 00:23:24.717254 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-04-14 00:23:24.717393 | orchestrator | 2025-04-14 00:23:24.717430 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-04-14 00:23:24.717507 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:24.789162 | orchestrator | 2025-04-14 00:23:24.789270 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-04-14 00:23:24.789303 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:23:24.869444 | orchestrator | 2025-04-14 00:23:24.869558 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-04-14 00:23:24.869637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-04-14 00:23:26.305826 | orchestrator | 2025-04-14 00:23:26.305945 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-04-14 00:23:26.305978 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:23:27.045268 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:23:27.045389 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:27.045412 | orchestrator | 2025-04-14 00:23:27.045428 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-04-14 00:23:27.045461 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:27.159925 | orchestrator | 2025-04-14 00:23:27.160044 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-04-14 00:23:27.160095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-04-14 00:23:27.829342 | orchestrator | 2025-04-14 00:23:27.829499 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-04-14 00:23:27.829553 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:23:28.516386 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:28.516537 | orchestrator | 2025-04-14 00:23:28.516639 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-04-14 00:23:28.516689 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:28.659142 | orchestrator | 2025-04-14 00:23:28.659259 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-04-14 00:23:28.659294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-04-14 00:23:30.221484 | orchestrator | 2025-04-14 00:23:30.221702 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-04-14 00:23:30.222473 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:30.753495 | orchestrator | 2025-04-14 00:23:30.753677 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-04-14 00:23:30.753718 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:32.140116 | orchestrator | 2025-04-14 00:23:32.140251 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-04-14 00:23:32.140315 | 
orchestrator | changed: [testbed-manager] => (item=conductor) 2025-04-14 00:23:32.851055 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-04-14 00:23:32.851194 | orchestrator | 2025-04-14 00:23:32.851226 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-04-14 00:23:32.851259 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:33.270178 | orchestrator | 2025-04-14 00:23:33.270298 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-04-14 00:23:33.270335 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:33.628451 | orchestrator | 2025-04-14 00:23:33.628555 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-04-14 00:23:33.628609 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:33.683169 | orchestrator | 2025-04-14 00:23:33.683279 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-04-14 00:23:33.683312 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:23:33.817518 | orchestrator | 2025-04-14 00:23:33.817670 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-04-14 00:23:33.817703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-04-14 00:23:33.867976 | orchestrator | 2025-04-14 00:23:33.868080 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-04-14 00:23:33.868110 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:36.018385 | orchestrator | 2025-04-14 00:23:36.018520 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-04-14 00:23:36.018558 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-04-14 00:23:36.728200 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-04-14 00:23:36.728322 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-04-14 00:23:36.728340 | orchestrator | 2025-04-14 00:23:36.728356 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-04-14 00:23:36.728387 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:37.492742 | orchestrator | 2025-04-14 00:23:37.492879 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-04-14 00:23:37.492917 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:37.575416 | orchestrator | 2025-04-14 00:23:37.575522 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-04-14 00:23:37.575552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-04-14 00:23:37.637236 | orchestrator | 2025-04-14 00:23:37.637342 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-04-14 00:23:37.637371 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:38.394190 | orchestrator | 2025-04-14 00:23:38.394301 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-04-14 00:23:38.394329 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-04-14 00:23:38.500169 | orchestrator | 2025-04-14 00:23:38.500290 | 
orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-04-14 00:23:38.500326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-04-14 00:23:39.270872 | orchestrator | 2025-04-14 00:23:39.271002 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-04-14 00:23:39.271043 | orchestrator | changed: [testbed-manager] 2025-04-14 00:23:40.022630 | orchestrator | 2025-04-14 00:23:40.022755 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-04-14 00:23:40.022793 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:40.085627 | orchestrator | 2025-04-14 00:23:40.085752 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-04-14 00:23:40.085788 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:23:40.152435 | orchestrator | 2025-04-14 00:23:40.152634 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-04-14 00:23:40.152675 | orchestrator | ok: [testbed-manager] 2025-04-14 00:23:41.079860 | orchestrator | 2025-04-14 00:23:41.080000 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-04-14 00:23:41.080842 | orchestrator | changed: [testbed-manager] 2025-04-14 00:24:25.944994 | orchestrator | 2025-04-14 00:24:25.945126 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-04-14 00:24:25.945163 | orchestrator | changed: [testbed-manager] 2025-04-14 00:24:26.646202 | orchestrator | 2025-04-14 00:24:26.646328 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-04-14 00:24:26.646366 | orchestrator | ok: [testbed-manager] 2025-04-14 00:24:29.430983 | orchestrator | 2025-04-14 00:24:29.431111 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-04-14 00:24:29.431149 | orchestrator | changed: [testbed-manager] 2025-04-14 00:24:29.499113 | orchestrator | 2025-04-14 00:24:29.499237 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-04-14 00:24:29.499286 | orchestrator | ok: [testbed-manager] 2025-04-14 00:24:29.563140 | orchestrator | 2025-04-14 00:24:29.563254 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-14 00:24:29.563272 | orchestrator | 2025-04-14 00:24:29.563287 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-04-14 00:24:29.563317 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:25:29.628685 | orchestrator | 2025-04-14 00:25:29.628840 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-04-14 00:25:29.628880 | orchestrator | Pausing for 60 seconds 2025-04-14 00:25:35.201425 | orchestrator | changed: [testbed-manager] 2025-04-14 00:25:35.201640 | orchestrator | 2025-04-14 00:25:35.201678 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-04-14 00:25:35.201722 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:17.051345 | orchestrator | 2025-04-14 00:26:17.051544 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 
2025-04-14 00:26:17.051588 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-04-14 00:26:22.989131 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-04-14 00:26:22.989285 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:22.989309 | orchestrator | 2025-04-14 00:26:22.989326 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-04-14 00:26:22.989371 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:23.079807 | orchestrator | 2025-04-14 00:26:23.079924 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-04-14 00:26:23.079959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-04-14 00:26:23.129000 | orchestrator | 2025-04-14 00:26:23.129113 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-04-14 00:26:23.129140 | orchestrator | 2025-04-14 00:26:23.129162 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-04-14 00:26:23.129191 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:26:23.224155 | orchestrator | 2025-04-14 00:26:23.224261 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:26:23.224279 | orchestrator | testbed-manager : ok=105 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-04-14 00:26:23.224294 | orchestrator | 2025-04-14 00:26:23.224323 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-04-14 00:26:23.233996 | orchestrator | + deactivate 2025-04-14 00:26:23.234133 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-04-14 00:26:23.234154 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-04-14 00:26:23.234168 | orchestrator | + export PATH 2025-04-14 00:26:23.234183 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-04-14 00:26:23.234198 | orchestrator | + '[' -n '' ']' 2025-04-14 00:26:23.234212 | orchestrator | + hash -r 2025-04-14 00:26:23.234226 | orchestrator | + '[' -n '' ']' 2025-04-14 00:26:23.234240 | orchestrator | + unset VIRTUAL_ENV 2025-04-14 00:26:23.234254 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-04-14 00:26:23.234268 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-04-14 00:26:23.234282 | orchestrator | + unset -f deactivate 2025-04-14 00:26:23.234329 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-04-14 00:26:23.234362 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-14 00:26:23.234950 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-14 00:26:23.234975 | orchestrator | + local max_attempts=60 2025-04-14 00:26:23.234990 | orchestrator | + local name=ceph-ansible 2025-04-14 00:26:23.235016 | orchestrator | + local attempt_num=1 2025-04-14 00:26:23.235036 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-14 00:26:23.267856 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:26:23.268676 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-14 00:26:23.268712 | orchestrator | + local max_attempts=60 2025-04-14 00:26:23.268737 | orchestrator | + local name=kolla-ansible 2025-04-14 00:26:23.268764 | orchestrator | + local attempt_num=1 2025-04-14 00:26:23.268797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-14 00:26:23.297928 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:26:23.298260 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-04-14 00:26:23.298373 | orchestrator | + local max_attempts=60 2025-04-14 00:26:23.298396 | orchestrator | + local name=osism-ansible 2025-04-14 00:26:23.298413 | orchestrator | + local attempt_num=1 2025-04-14 00:26:23.298445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-14 00:26:23.339931 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:26:24.050733 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-14 00:26:24.050855 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-14 00:26:24.050892 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-14 00:26:24.112139 | orchestrator | + [[ -1 -ge 0 ]] 2025-04-14 00:26:24.367917 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-04-14 00:26:24.368033 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-04-14 00:26:24.368070 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-14 00:26:24.373841 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.373897 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.373913 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-04-14 00:26:24.373951 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-04-14 00:26:24.373966 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.373984 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.373998 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374012 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 49 seconds (healthy) 2025-04-14 00:26:24.374078 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374093 | orchestrator | manager-mariadb-1 index.docker.io/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-04-14 00:26:24.374133 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374147 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374161 | orchestrator | manager-redis-1 index.docker.io/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-04-14 00:26:24.374175 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374189 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374203 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374217 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-04-14 00:26:24.374242 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-04-14 00:26:24.535739 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-04-14 00:26:24.542829 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 9 minutes ago Up 8 minutes (healthy) 2025-04-14 00:26:24.542927 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 9 minutes ago Up 3 minutes (healthy) 2025-04-14 00:26:24.542946 | orchestrator | netbox-postgres-1 index.docker.io/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 9 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-04-14 00:26:24.542962 | orchestrator | netbox-redis-1 index.docker.io/library/redis:7.4.2-alpine "docker-entrypoint.s…" redis 9 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-04-14 00:26:24.542992 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-14 00:26:24.600882 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-14 00:26:24.607229 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-04-14 00:26:24.607324 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-04-14 00:26:26.284048 | orchestrator | 2025-04-14 00:26:26 | INFO  | Task fefc6dc9-83e5-42fc-8b64-7f251549a6f1 (resolvconf) was prepared for execution. 
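The wait_for_container_healthy calls traced above gate the rest of the deployment on Docker's reported health state for the ceph-ansible, kolla-ansible and osism-ansible containers. A minimal sketch of such a helper, assuming a one-second pause between attempts and a non-zero return on timeout (neither appears in the trace, which only shows the docker inspect call and the attempt counters):

  wait_for_container_healthy() {
      # Poll "docker inspect" until the container reports a healthy state,
      # as in the traced call: wait_for_container_healthy 60 ceph-ansible
      local max_attempts="$1"
      local name="$2"
      local attempt_num=1
      until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
          if (( attempt_num >= max_attempts )); then
              echo "$name did not become healthy after $max_attempts attempts" >&2
              return 1
          fi
          attempt_num=$(( attempt_num + 1 ))
          sleep 1   # assumed interval, not visible in the trace
      done
  }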
2025-04-14 00:26:29.449692 | orchestrator | 2025-04-14 00:26:26 | INFO  | It takes a moment until task fefc6dc9-83e5-42fc-8b64-7f251549a6f1 (resolvconf) has been started and output is visible here. 2025-04-14 00:26:29.449862 | orchestrator | 2025-04-14 00:26:29.450544 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-04-14 00:26:29.453082 | orchestrator | 2025-04-14 00:26:29.453693 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:26:29.453724 | orchestrator | Monday 14 April 2025 00:26:29 +0000 (0:00:00.094) 0:00:00.094 ********** 2025-04-14 00:26:34.788532 | orchestrator | ok: [testbed-manager] 2025-04-14 00:26:34.788825 | orchestrator | 2025-04-14 00:26:34.790062 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-14 00:26:34.790807 | orchestrator | Monday 14 April 2025 00:26:34 +0000 (0:00:05.342) 0:00:05.436 ********** 2025-04-14 00:26:34.839951 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:26:34.840750 | orchestrator | 2025-04-14 00:26:34.842181 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-14 00:26:34.843623 | orchestrator | Monday 14 April 2025 00:26:34 +0000 (0:00:00.051) 0:00:05.487 ********** 2025-04-14 00:26:34.928754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-04-14 00:26:34.932807 | orchestrator | 2025-04-14 00:26:34.932922 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-14 00:26:35.020030 | orchestrator | Monday 14 April 2025 00:26:34 +0000 (0:00:00.088) 0:00:05.575 ********** 2025-04-14 00:26:35.020192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-04-14 00:26:35.020432 | orchestrator | 2025-04-14 00:26:35.024414 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-14 00:26:35.024733 | orchestrator | Monday 14 April 2025 00:26:35 +0000 (0:00:00.091) 0:00:05.667 ********** 2025-04-14 00:26:36.230125 | orchestrator | ok: [testbed-manager] 2025-04-14 00:26:36.231301 | orchestrator | 2025-04-14 00:26:36.232933 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-14 00:26:36.234244 | orchestrator | Monday 14 April 2025 00:26:36 +0000 (0:00:01.208) 0:00:06.875 ********** 2025-04-14 00:26:36.293592 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:26:36.294934 | orchestrator | 2025-04-14 00:26:36.295416 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-14 00:26:36.296364 | orchestrator | Monday 14 April 2025 00:26:36 +0000 (0:00:00.064) 0:00:06.940 ********** 2025-04-14 00:26:36.816463 | orchestrator | ok: [testbed-manager] 2025-04-14 00:26:36.817327 | orchestrator | 2025-04-14 00:26:36.817520 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-14 00:26:36.818258 | orchestrator | Monday 14 April 2025 00:26:36 +0000 (0:00:00.523) 0:00:07.463 ********** 2025-04-14 00:26:36.911233 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:26:36.911972 | orchestrator | 2025-04-14 00:26:36.913800 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-14 00:26:36.914281 | orchestrator | Monday 14 April 2025 00:26:36 +0000 (0:00:00.093) 0:00:07.557 ********** 2025-04-14 00:26:37.498232 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:37.498385 | orchestrator | 2025-04-14 00:26:37.499216 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-14 00:26:37.500163 | orchestrator | Monday 14 April 2025 00:26:37 +0000 (0:00:00.587) 0:00:08.145 ********** 2025-04-14 00:26:38.677776 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:38.678293 | orchestrator | 2025-04-14 00:26:38.679466 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-14 00:26:38.680386 | orchestrator | Monday 14 April 2025 00:26:38 +0000 (0:00:01.177) 0:00:09.323 ********** 2025-04-14 00:26:39.676297 | orchestrator | ok: [testbed-manager] 2025-04-14 00:26:39.676840 | orchestrator | 2025-04-14 00:26:39.677622 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-14 00:26:39.679700 | orchestrator | Monday 14 April 2025 00:26:39 +0000 (0:00:00.998) 0:00:10.321 ********** 2025-04-14 00:26:39.764232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-04-14 00:26:39.765362 | orchestrator | 2025-04-14 00:26:39.765413 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-14 00:26:39.766331 | orchestrator | Monday 14 April 2025 00:26:39 +0000 (0:00:00.089) 0:00:10.411 ********** 2025-04-14 00:26:40.988381 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:40.988627 | orchestrator | 2025-04-14 00:26:40.989707 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:26:40.990007 | orchestrator | 2025-04-14 00:26:40 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:26:40.992300 | orchestrator | 2025-04-14 00:26:40 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:26:40.992360 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:26:40.993735 | orchestrator | 2025-04-14 00:26:40.994347 | orchestrator | Monday 14 April 2025 00:26:40 +0000 (0:00:01.224) 0:00:11.635 ********** 2025-04-14 00:26:40.995061 | orchestrator | =============================================================================== 2025-04-14 00:26:40.995677 | orchestrator | Gathering Facts --------------------------------------------------------- 5.34s 2025-04-14 00:26:40.996429 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.22s 2025-04-14 00:26:40.997164 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.21s 2025-04-14 00:26:40.997399 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s 2025-04-14 00:26:40.998085 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2025-04-14 00:26:40.998552 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2025-04-14 00:26:40.998962 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.52s 2025-04-14 00:26:40.999520 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-04-14 00:26:41.000211 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-04-14 00:26:41.000856 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-04-14 00:26:41.001085 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-04-14 00:26:41.001856 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-04-14 00:26:41.002380 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-04-14 00:26:41.431081 | orchestrator | + osism apply sshconfig 2025-04-14 00:26:42.942780 | orchestrator | 2025-04-14 00:26:42 | INFO  | Task 4a38aca2-ec48-4268-a8af-9590d50cd73b (sshconfig) was prepared for execution. 2025-04-14 00:26:46.147351 | orchestrator | 2025-04-14 00:26:42 | INFO  | It takes a moment until task 4a38aca2-ec48-4268-a8af-9590d50cd73b (sshconfig) has been started and output is visible here. 
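
The resolvconf play above removed the packages that were managing /etc/resolv.conf, linked /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf and restarted systemd-resolved on testbed-manager. If the result needs to be checked by hand, standard systemd tooling is enough (this is a verification sketch, not part of the job):

    # Expected state after the play: /etc/resolv.conf is the systemd-resolved stub link
    readlink -f /etc/resolv.conf          # -> /run/systemd/resolve/stub-resolv.conf
    systemctl is-active systemd-resolved  # -> active
    resolvectl status                     # lists the name servers written by the role
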
2025-04-14 00:26:46.147607 | orchestrator | 2025-04-14 00:26:46.148864 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-04-14 00:26:46.148897 | orchestrator | 2025-04-14 00:26:46.149612 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-04-14 00:26:46.150346 | orchestrator | Monday 14 April 2025 00:26:46 +0000 (0:00:00.109) 0:00:00.109 ********** 2025-04-14 00:26:46.722009 | orchestrator | ok: [testbed-manager] 2025-04-14 00:26:46.723657 | orchestrator | 2025-04-14 00:26:46.723714 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-04-14 00:26:46.723740 | orchestrator | Monday 14 April 2025 00:26:46 +0000 (0:00:00.577) 0:00:00.686 ********** 2025-04-14 00:26:47.269010 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:47.269300 | orchestrator | 2025-04-14 00:26:47.269341 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-04-14 00:26:47.270581 | orchestrator | Monday 14 April 2025 00:26:47 +0000 (0:00:00.546) 0:00:01.233 ********** 2025-04-14 00:26:53.362885 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-04-14 00:26:53.364171 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-04-14 00:26:53.364949 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-04-14 00:26:53.365066 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-04-14 00:26:53.365845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-14 00:26:53.368596 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-04-14 00:26:53.369417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-04-14 00:26:53.370979 | orchestrator | 2025-04-14 00:26:53.372591 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-04-14 00:26:53.372670 | orchestrator | Monday 14 April 2025 00:26:53 +0000 (0:00:06.091) 0:00:07.325 ********** 2025-04-14 00:26:53.438889 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:26:53.439409 | orchestrator | 2025-04-14 00:26:53.440659 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-04-14 00:26:53.441579 | orchestrator | Monday 14 April 2025 00:26:53 +0000 (0:00:00.078) 0:00:07.403 ********** 2025-04-14 00:26:54.066972 | orchestrator | changed: [testbed-manager] 2025-04-14 00:26:54.067611 | orchestrator | 2025-04-14 00:26:54.068850 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:26:54.069886 | orchestrator | 2025-04-14 00:26:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:26:54.070304 | orchestrator | 2025-04-14 00:26:54 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:26:54.071003 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:26:54.071862 | orchestrator | 2025-04-14 00:26:54.072981 | orchestrator | Monday 14 April 2025 00:26:54 +0000 (0:00:00.628) 0:00:08.032 ********** 2025-04-14 00:26:54.074553 | orchestrator | =============================================================================== 2025-04-14 00:26:54.075350 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.09s 2025-04-14 00:26:54.075768 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2025-04-14 00:26:54.076692 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-04-14 00:26:54.079141 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.55s 2025-04-14 00:26:54.080045 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-04-14 00:26:54.495496 | orchestrator | + osism apply known-hosts 2025-04-14 00:26:55.980419 | orchestrator | 2025-04-14 00:26:55 | INFO  | Task 147a7f48-83c3-4e37-b52d-827c1506af86 (known-hosts) was prepared for execution. 2025-04-14 00:26:59.232539 | orchestrator | 2025-04-14 00:26:55 | INFO  | It takes a moment until task 147a7f48-83c3-4e37-b52d-827c1506af86 (known-hosts) has been started and output is visible here. 2025-04-14 00:26:59.232691 | orchestrator | 2025-04-14 00:26:59.233375 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-04-14 00:26:59.234596 | orchestrator | 2025-04-14 00:26:59.236959 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-04-14 00:26:59.237676 | orchestrator | Monday 14 April 2025 00:26:59 +0000 (0:00:00.112) 0:00:00.112 ********** 2025-04-14 00:27:05.409825 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-14 00:27:05.410311 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-14 00:27:05.410349 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-14 00:27:05.410364 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-14 00:27:05.410396 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-14 00:27:05.410410 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-14 00:27:05.410433 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-14 00:27:05.410851 | orchestrator | 2025-04-14 00:27:05.411253 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-04-14 00:27:05.412642 | orchestrator | Monday 14 April 2025 00:27:05 +0000 (0:00:06.178) 0:00:06.291 ********** 2025-04-14 00:27:05.568131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-14 00:27:05.570300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-14 00:27:05.570347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-14 
00:27:05.570973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-14 00:27:05.571593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-14 00:27:05.572732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-14 00:27:05.573130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-14 00:27:05.573161 | orchestrator | 2025-04-14 00:27:05.573177 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:05.573197 | orchestrator | Monday 14 April 2025 00:27:05 +0000 (0:00:00.160) 0:00:06.452 ********** 2025-04-14 00:27:06.804756 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQxXFDHe6D2Y/4FbEL7pQlC8SHOaNNHvPVt/Qv4XlGKyUuBrxgcuTFh1pe2AIo0EqWXlHyZ0U6GdYkITczOIdD/rlN8Rvm9oize+cTmUY8eHbh1zGviizCwOWe2G9HPrRr4vyWNgwTLqAkcywIV7Qqn5sWycoA7hXCLKiX4X8Wwbnlxj3kkrCs2FtKRk/aXSms59++1IQHOpCC63Ta0cm3ikZEzx1DewTzDBprDeR+JwkF4HaWfippPNoKX0i0pqNsrkAFICrpV7grilpD9y4aB88k/JzYEF7rvPUZiCfXPO6y8gbAC7YakrExdpgtgjpt4xFRNFDD3ZPQlGiy34pbpqSoiPgLUzdHbJVmVZzfG8SSYK69HWpWCNHZ+fgIfQRiVq2/dzsdX2o8odZTWYNtBGSdVO+hQ3B5POLWBofR9nMHK755IaCEYSXlaC2vmPUTPVJOxZVI1ShJB+LkNJe03qMA2ONG91rlY5miAJeOGdbY22JgKG/9JX+SKfPT89E=) 2025-04-14 00:27:06.805987 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKB/TODZB3/pDDCibKIIY/qI5VMm94DqwiK5/+lDzTgr1Z8wdzN+s5WwTK8cOPdxIWqP3uJN49Wuove+OwYvboU=) 2025-04-14 00:27:06.807298 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORztv3Msk8rtIicyxVyir2qFA+ZruvH2RVXmSTIAGDT) 2025-04-14 00:27:06.807664 | orchestrator | 2025-04-14 00:27:06.808103 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:06.808843 | orchestrator | Monday 14 April 2025 00:27:06 +0000 (0:00:01.233) 0:00:07.685 ********** 2025-04-14 00:27:07.996910 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLISnrxfez0BvGS6scWW9ygq9K3qp1wFEbgJ7pTuIcK) 2025-04-14 00:27:07.998978 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9FRxJeNlwwCiZqcuW4i16fan3XLoAT3uT3EEIoXSJ3yl14NVVHhv53aVHtRFO0+5nbcIVOh0tz5F+qWFYAinKrfVi1MP5rWAXlBat7KkJaQjO+4HjCZZ0/7iPQGmdUqV783J+wpUxdr9EGBLVnpg0GXJMOZiJCR7Pl5I8v9KOB1hlFBXbuyOnDOxXCcENKVsYFeAv6kkIcrBmUIOg05tmx+ltRRcavm3PI+uFePDJlGCU0JAjvdUt12+eJ/R5F2f1dUNDeBiD8lBDmIpSbuvm6hCZrBnZwJAnYrrBUx/k+P9cXQEz/DdEsPWWXiHrd97qad5qOXCKLRh76cCm/c8W1NdfVytd0JHeudOyNWpW5WX4yh5sd8PZodFS7R6YsintM5v2A5XyG8+oJxyRhXAe4JNdC7gMgfq9jgWzHq0HNmHmiuFSHxW+Knm3QkMCg/+29uN12xL9O1Nj+Xlrrc1rE9L4M3NmBwBqPkIyakm/8toc4in9zNjMt9CkusKfmU8=) 2025-04-14 00:27:08.001454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA2UoVsV9BoSxBZ4773IzkkDjs/BoDF1OthBMBu1HBziGlkyIuS5I9AKi+E+USQwMjwa9v67wGVqECP9bsVylnI=) 2025-04-14 00:27:08.002873 | orchestrator | 2025-04-14 00:27:08.003111 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:08.003803 | orchestrator | Monday 14 April 2025 00:27:07 +0000 (0:00:01.193) 0:00:08.878 ********** 2025-04-14 00:27:09.148767 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqHftyL/j6hgQKlzMZB+PldZLIVMScmeWT0OuzZypm/kZv3FENwCL1vR/WOs6CM35i3O7XhGMa5J1hKBdxMmceFwwYJq7dfR6YWClaXC7fKkpjd67X75WRueuuB69c4GLsZmPmSEzh0pLLR6O6XSFIFvpH2CTouBvETQ3YSnSk9YQ94a3fVnn1x5bO+lLsDZt21RAwd+NKfaYJWYY8jgF3LM95g8FIdWCODe3NstKlBIh9z9yb97BLW7SzdJZK7ybeoQz8EPUSVsw8rLpy/ZgVPB5f9EBRq3HRo47YR6a3Nta3HfT8VBR/M8H6vPFa2alayqmDOR77TL9UWKTil5t4UQumzDW390OY+p46z89pUX5P65KYahQ+AdGNmoCTGVWp1KfCYX26FAjv+YFIhGqKEwtuhzJHvBMKvM5IVEBrhCE7+8gJA+uy2jJ5VdsMHWkLTZVVpKw/kuwzSiUjOdNILH91ayMuezr9TB5Ld6nChtDjXMR7KFnW6zQrlew1xdU=) 2025-04-14 00:27:09.149731 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKfniEsdH9VtaZi1l4Zc/QNLWg2mo9y/sPIH7wOCLhCPYcCOleMzY9NIwvh+vkgJmfuhPfvG0L+nPx7B0XAvrQg=) 2025-04-14 00:27:09.149778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVVulV1aqhg+5EUv4I0DrVvq8IBg41rwOJm6zF82/Jt) 2025-04-14 00:27:09.150924 | orchestrator | 2025-04-14 00:27:09.151972 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:09.152829 | orchestrator | Monday 14 April 2025 00:27:09 +0000 (0:00:01.150) 0:00:10.029 ********** 2025-04-14 00:27:10.239598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPfYbR8wZzApdMWzsJzdIQp6R1gtV3YRiDyLVFwLXuAQzBblI++hbSXy3VEnGyBOmtx+mip03nZDP7uOf8+JbcIdkcfdsWP/BqekTtnM7pev00LFNMHifmRouyZrmNctm5gxJcU/XIZiXPNShtw/QnBiP9lP/BQaM9QCeUuSP8YaUNjlbLVBTKHJ1bO/oujkDwROKv5vQZJvZrbgQuy2dt7cs3XyMQbjPU+WP9ra1FZpntKLB1yzhQXbhIKAOF4x1SrtqNbET1DUMVEMDHOLnPvtJMsO+hWF3IGcVReiqxJ3aqosUFaaN0L0se2u7pnZYQio9PV9q4wXp6flCv91b9K5wQ77X0jLEJmB2bCYuBR4qjnTyCT2u6FzlrP9dhzgn6N4CvMds9VrbIBKY0hUmlZFzSio0HGoGHAxMyry61ER3k/ZhGT4A/6g+AJurjR45I8r8ug1x05JeKEdTgS5ToCKl9RplJY+VMFKYw3U7NbfU19iHZrpvm7l46SLp4+f8=) 2025-04-14 00:27:10.240060 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNO3ssyg5ZIDaGW/fDSJ4eye1rmzEDJ69UvYExjnszMAxatUMozQ3Wz2JQHZKRFq5e2M4cDDzOrBNLdLhHFsro=) 2025-04-14 00:27:10.240175 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbdcFlSsPGuGQlxeCXJ4Z4CLF3aNtC//bBErw2KlQoj) 2025-04-14 00:27:10.241202 | orchestrator | 2025-04-14 00:27:10.241946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:10.242640 | orchestrator | Monday 14 April 2025 00:27:10 +0000 (0:00:01.091) 0:00:11.121 ********** 2025-04-14 00:27:11.363413 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGWC8aezWqPxyiPe/Uc7Ce/OaN54/Ca6NsCPF7to2Q3t7bk2IID8EVCa7Q37xVBQI3sFqHVGBO+1Ou7CUAvXWDdE3Xh2eiaowmEfjRJS+Zv7gx9aoALG9yyADgG+SP3wYS0vOo8kb94l4XsW1yLGqaSpeXaeSlYVp7MvTWTlailTmpFcAM7pyPVVCmpYx1Cc2CbxcVRPce+yyFvCGLwviQRzB8fHa/8z663FZ3AOqz4lChK+U706mkbAzBjbrPZmQyU5l8xyhfpx2rfqoEELIwqC+/T6+y/9RueY14NCPl8JsaRuWSAA2pkHU11iiB5fdshzTDy1b819iRJ/T/ScklRczDJ80bP/Pc9erlFvHsI7m0GmtE/990ECUTaFDNq5WRWCjISFd92nA2w2MZtCISZwFj0TY6OWvnRugHboSoyJ6azAt1Q/RRgwm2JzKHQEHROubHNaNQYYWtkjmI8UUlwdr/EzzGXzWnx16KIJmv8uKunQlE1gTPMgQC0SxFSZU=) 2025-04-14 00:27:11.364030 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7Wf35AXkuQOB2gg1j/JAHiep6xWnIl3daHLnvtmqZ3xeH/ciNNSpPltiAiGiaeFv+8It7My4+VKXq7FdF1qGM=) 2025-04-14 00:27:11.364769 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIODYrC9tUQpM6J85pLZ5x+lSLkjFQ9xP8FRjcap+RZbi) 2025-04-14 00:27:11.365559 | orchestrator | 2025-04-14 00:27:11.366167 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:11.366902 | orchestrator | Monday 14 April 2025 00:27:11 +0000 (0:00:01.122) 0:00:12.243 ********** 2025-04-14 00:27:12.515530 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5eDGn/IHdglf38FlIBjDgQIS3M8m5yUS58g4zo+f7n) 2025-04-14 00:27:12.516148 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH4iIs6zboQGK0Bb/hhUVsgKxKn6puNOTQkPaIFXzMjZM69n1TgQFOPrz42ThL+YK4pMgLGM1mj05WMBgEs87/34WHrJ8P0aFypxRkm0iw7JuewcpGpaqfNCRYcgm8U+wUM96m+c9le2CcsV9LO+9SW2TjV4ezIUtspX3bQnYSb7fObFFjl8QnkGva2X0iJ8n0WmdZbn/v9T4j1558qi/jtRdNJGUKuSkPzK1IYqQS3vsetOksGlS0bVo8VSkGRjLyNCjwXMjdYOaN4eqKOm32ZETf3Tlv0iAXgr7tb7/tLnKRnf4kXAHVOAohz+hiMUL+a6IP9i73qqm2KpV/DQ0XTQoWCKjAaySMFuhxy+a+E87mPu1xEoWqLHxaqjQaHE1CxMJcUd5iaAnAJTNxSEKzIa5F98I1bX/ZRZWPM1BJAocBE0hg0xtp8SRiI933OjT9r7j2QXAIEryESe4OvDppth7ELj/Qpjp1r9Pto4lJRORHtvLXrngRza+iQ2SmFPU=) 2025-04-14 00:27:12.516197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHc60YPAEU5v03feNAcBYNrfLcRKoWluR93nWE47PqtsBxBOF2f0/4AK4zHTMdWUlLjK4pj3ODrhdHMcBHz1Mw=) 2025-04-14 00:27:12.516687 | orchestrator | 2025-04-14 00:27:12.517476 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:12.517834 | orchestrator | Monday 14 April 2025 00:27:12 +0000 (0:00:01.151) 0:00:13.395 ********** 2025-04-14 00:27:13.754961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+kV9f031zbgkrARwXqrr4F+1eRNMei/EVM4KCp9ZjaUa/kdOf1akVQ53Cjppo9g0wjsAOkKUEC5o1VoSvKV0Y=) 2025-04-14 00:27:13.755954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbCglWSgSUAmYlGKfAghyv7WAFizq0MMX785h9MItVV) 2025-04-14 00:27:13.756023 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+lPF82Mfw+oEcGlDHh6sVq9uaaywXMYtbtVxWxKge/nkoSoTVvkWa1JaXFftytbZFxkxrLGNSKjLX7Xd1RoW1dWXpgr6H4TCYXJ/LrGpzxVDZXxN4G7iY9DyaGCg/5XL9dNtp1gm3VRdxSTCQXMK7HBXT3dHbSyEYAWZyt68tvvyhUkM+wrQK+vo9CFBWsMo3X85evVcL4ezijDVOUL/ef8aN6ozqryi7goMNfhE3xSiR/nfXxhFh7zABLvNvQY2HiPded4o33mUB0RYqsU8gRSg8qJ52b111INhWlYoj6oHltw6tyNKrz6nucDN64oYJ/My+pZu+G0ucCq9W3kkahNyUyZpmBttETP9y85tQhNpjIkuXym4F3QBJa+QB7iJOJRCwygfHwHURrhjlKG6dRALLY9tkoLldXM83SvTJIuHDk/Zw/aKd0yzK1i/2HCLbrn0PqyLZRs3rmWYQHbQeXfSt/U3QpZj/dpDnVkKTZlfg80QhPPaG062DXsVxqUU=) 2025-04-14 00:27:13.757499 | orchestrator | 2025-04-14 00:27:13.758517 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-04-14 00:27:13.759613 | orchestrator | Monday 14 April 2025 00:27:13 +0000 (0:00:01.239) 0:00:14.635 ********** 2025-04-14 00:27:19.258707 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-04-14 00:27:19.261160 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-04-14 00:27:19.261615 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-04-14 00:27:19.261663 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-04-14 00:27:19.263245 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-04-14 00:27:19.264569 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-04-14 00:27:19.265502 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-04-14 00:27:19.266113 | orchestrator | 2025-04-14 00:27:19.266496 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-04-14 00:27:19.267229 | orchestrator | Monday 14 April 2025 00:27:19 +0000 (0:00:05.503) 0:00:20.138 ********** 2025-04-14 00:27:19.445029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-04-14 00:27:19.445724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-04-14 00:27:19.446703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-04-14 00:27:19.448243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-04-14 00:27:19.448964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-04-14 00:27:19.449679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-04-14 00:27:19.450526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-04-14 00:27:19.451013 | orchestrator | 2025-04-14 00:27:19.451666 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 
00:27:19.452188 | orchestrator | Monday 14 April 2025 00:27:19 +0000 (0:00:00.188) 0:00:20.326 ********** 2025-04-14 00:27:20.595398 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQxXFDHe6D2Y/4FbEL7pQlC8SHOaNNHvPVt/Qv4XlGKyUuBrxgcuTFh1pe2AIo0EqWXlHyZ0U6GdYkITczOIdD/rlN8Rvm9oize+cTmUY8eHbh1zGviizCwOWe2G9HPrRr4vyWNgwTLqAkcywIV7Qqn5sWycoA7hXCLKiX4X8Wwbnlxj3kkrCs2FtKRk/aXSms59++1IQHOpCC63Ta0cm3ikZEzx1DewTzDBprDeR+JwkF4HaWfippPNoKX0i0pqNsrkAFICrpV7grilpD9y4aB88k/JzYEF7rvPUZiCfXPO6y8gbAC7YakrExdpgtgjpt4xFRNFDD3ZPQlGiy34pbpqSoiPgLUzdHbJVmVZzfG8SSYK69HWpWCNHZ+fgIfQRiVq2/dzsdX2o8odZTWYNtBGSdVO+hQ3B5POLWBofR9nMHK755IaCEYSXlaC2vmPUTPVJOxZVI1ShJB+LkNJe03qMA2ONG91rlY5miAJeOGdbY22JgKG/9JX+SKfPT89E=) 2025-04-14 00:27:20.596413 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIORztv3Msk8rtIicyxVyir2qFA+ZruvH2RVXmSTIAGDT) 2025-04-14 00:27:20.597529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKB/TODZB3/pDDCibKIIY/qI5VMm94DqwiK5/+lDzTgr1Z8wdzN+s5WwTK8cOPdxIWqP3uJN49Wuove+OwYvboU=) 2025-04-14 00:27:20.598509 | orchestrator | 2025-04-14 00:27:20.599025 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:20.600038 | orchestrator | Monday 14 April 2025 00:27:20 +0000 (0:00:01.149) 0:00:21.476 ********** 2025-04-14 00:27:21.823521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA2UoVsV9BoSxBZ4773IzkkDjs/BoDF1OthBMBu1HBziGlkyIuS5I9AKi+E+USQwMjwa9v67wGVqECP9bsVylnI=) 2025-04-14 00:27:21.824147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9FRxJeNlwwCiZqcuW4i16fan3XLoAT3uT3EEIoXSJ3yl14NVVHhv53aVHtRFO0+5nbcIVOh0tz5F+qWFYAinKrfVi1MP5rWAXlBat7KkJaQjO+4HjCZZ0/7iPQGmdUqV783J+wpUxdr9EGBLVnpg0GXJMOZiJCR7Pl5I8v9KOB1hlFBXbuyOnDOxXCcENKVsYFeAv6kkIcrBmUIOg05tmx+ltRRcavm3PI+uFePDJlGCU0JAjvdUt12+eJ/R5F2f1dUNDeBiD8lBDmIpSbuvm6hCZrBnZwJAnYrrBUx/k+P9cXQEz/DdEsPWWXiHrd97qad5qOXCKLRh76cCm/c8W1NdfVytd0JHeudOyNWpW5WX4yh5sd8PZodFS7R6YsintM5v2A5XyG8+oJxyRhXAe4JNdC7gMgfq9jgWzHq0HNmHmiuFSHxW+Knm3QkMCg/+29uN12xL9O1Nj+Xlrrc1rE9L4M3NmBwBqPkIyakm/8toc4in9zNjMt9CkusKfmU8=) 2025-04-14 00:27:21.824568 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGLISnrxfez0BvGS6scWW9ygq9K3qp1wFEbgJ7pTuIcK) 2025-04-14 00:27:21.824917 | orchestrator | 2025-04-14 00:27:21.825588 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:21.825861 | orchestrator | Monday 14 April 2025 00:27:21 +0000 (0:00:01.228) 0:00:22.705 ********** 2025-04-14 00:27:22.943418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqHftyL/j6hgQKlzMZB+PldZLIVMScmeWT0OuzZypm/kZv3FENwCL1vR/WOs6CM35i3O7XhGMa5J1hKBdxMmceFwwYJq7dfR6YWClaXC7fKkpjd67X75WRueuuB69c4GLsZmPmSEzh0pLLR6O6XSFIFvpH2CTouBvETQ3YSnSk9YQ94a3fVnn1x5bO+lLsDZt21RAwd+NKfaYJWYY8jgF3LM95g8FIdWCODe3NstKlBIh9z9yb97BLW7SzdJZK7ybeoQz8EPUSVsw8rLpy/ZgVPB5f9EBRq3HRo47YR6a3Nta3HfT8VBR/M8H6vPFa2alayqmDOR77TL9UWKTil5t4UQumzDW390OY+p46z89pUX5P65KYahQ+AdGNmoCTGVWp1KfCYX26FAjv+YFIhGqKEwtuhzJHvBMKvM5IVEBrhCE7+8gJA+uy2jJ5VdsMHWkLTZVVpKw/kuwzSiUjOdNILH91ayMuezr9TB5Ld6nChtDjXMR7KFnW6zQrlew1xdU=) 2025-04-14 00:27:22.944603 | orchestrator | changed: [testbed-manager] 
=> (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKfniEsdH9VtaZi1l4Zc/QNLWg2mo9y/sPIH7wOCLhCPYcCOleMzY9NIwvh+vkgJmfuhPfvG0L+nPx7B0XAvrQg=) 2025-04-14 00:27:22.945110 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVVulV1aqhg+5EUv4I0DrVvq8IBg41rwOJm6zF82/Jt) 2025-04-14 00:27:22.945662 | orchestrator | 2025-04-14 00:27:22.947411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:22.948774 | orchestrator | Monday 14 April 2025 00:27:22 +0000 (0:00:01.120) 0:00:23.825 ********** 2025-04-14 00:27:24.061503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCPfYbR8wZzApdMWzsJzdIQp6R1gtV3YRiDyLVFwLXuAQzBblI++hbSXy3VEnGyBOmtx+mip03nZDP7uOf8+JbcIdkcfdsWP/BqekTtnM7pev00LFNMHifmRouyZrmNctm5gxJcU/XIZiXPNShtw/QnBiP9lP/BQaM9QCeUuSP8YaUNjlbLVBTKHJ1bO/oujkDwROKv5vQZJvZrbgQuy2dt7cs3XyMQbjPU+WP9ra1FZpntKLB1yzhQXbhIKAOF4x1SrtqNbET1DUMVEMDHOLnPvtJMsO+hWF3IGcVReiqxJ3aqosUFaaN0L0se2u7pnZYQio9PV9q4wXp6flCv91b9K5wQ77X0jLEJmB2bCYuBR4qjnTyCT2u6FzlrP9dhzgn6N4CvMds9VrbIBKY0hUmlZFzSio0HGoGHAxMyry61ER3k/ZhGT4A/6g+AJurjR45I8r8ug1x05JeKEdTgS5ToCKl9RplJY+VMFKYw3U7NbfU19iHZrpvm7l46SLp4+f8=) 2025-04-14 00:27:24.061711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNO3ssyg5ZIDaGW/fDSJ4eye1rmzEDJ69UvYExjnszMAxatUMozQ3Wz2JQHZKRFq5e2M4cDDzOrBNLdLhHFsro=) 2025-04-14 00:27:24.061751 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbdcFlSsPGuGQlxeCXJ4Z4CLF3aNtC//bBErw2KlQoj) 2025-04-14 00:27:24.061769 | orchestrator | 2025-04-14 00:27:24.061784 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:24.061806 | orchestrator | Monday 14 April 2025 00:27:24 +0000 (0:00:01.117) 0:00:24.943 ********** 2025-04-14 00:27:25.152356 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGWC8aezWqPxyiPe/Uc7Ce/OaN54/Ca6NsCPF7to2Q3t7bk2IID8EVCa7Q37xVBQI3sFqHVGBO+1Ou7CUAvXWDdE3Xh2eiaowmEfjRJS+Zv7gx9aoALG9yyADgG+SP3wYS0vOo8kb94l4XsW1yLGqaSpeXaeSlYVp7MvTWTlailTmpFcAM7pyPVVCmpYx1Cc2CbxcVRPce+yyFvCGLwviQRzB8fHa/8z663FZ3AOqz4lChK+U706mkbAzBjbrPZmQyU5l8xyhfpx2rfqoEELIwqC+/T6+y/9RueY14NCPl8JsaRuWSAA2pkHU11iiB5fdshzTDy1b819iRJ/T/ScklRczDJ80bP/Pc9erlFvHsI7m0GmtE/990ECUTaFDNq5WRWCjISFd92nA2w2MZtCISZwFj0TY6OWvnRugHboSoyJ6azAt1Q/RRgwm2JzKHQEHROubHNaNQYYWtkjmI8UUlwdr/EzzGXzWnx16KIJmv8uKunQlE1gTPMgQC0SxFSZU=) 2025-04-14 00:27:25.152619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH7Wf35AXkuQOB2gg1j/JAHiep6xWnIl3daHLnvtmqZ3xeH/ciNNSpPltiAiGiaeFv+8It7My4+VKXq7FdF1qGM=) 2025-04-14 00:27:25.153827 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIODYrC9tUQpM6J85pLZ5x+lSLkjFQ9xP8FRjcap+RZbi) 2025-04-14 00:27:25.154797 | orchestrator | 2025-04-14 00:27:25.155523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:25.155976 | orchestrator | Monday 14 April 2025 00:27:25 +0000 (0:00:01.090) 0:00:26.033 ********** 2025-04-14 00:27:26.306883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO5eDGn/IHdglf38FlIBjDgQIS3M8m5yUS58g4zo+f7n) 
2025-04-14 00:27:26.307693 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH4iIs6zboQGK0Bb/hhUVsgKxKn6puNOTQkPaIFXzMjZM69n1TgQFOPrz42ThL+YK4pMgLGM1mj05WMBgEs87/34WHrJ8P0aFypxRkm0iw7JuewcpGpaqfNCRYcgm8U+wUM96m+c9le2CcsV9LO+9SW2TjV4ezIUtspX3bQnYSb7fObFFjl8QnkGva2X0iJ8n0WmdZbn/v9T4j1558qi/jtRdNJGUKuSkPzK1IYqQS3vsetOksGlS0bVo8VSkGRjLyNCjwXMjdYOaN4eqKOm32ZETf3Tlv0iAXgr7tb7/tLnKRnf4kXAHVOAohz+hiMUL+a6IP9i73qqm2KpV/DQ0XTQoWCKjAaySMFuhxy+a+E87mPu1xEoWqLHxaqjQaHE1CxMJcUd5iaAnAJTNxSEKzIa5F98I1bX/ZRZWPM1BJAocBE0hg0xtp8SRiI933OjT9r7j2QXAIEryESe4OvDppth7ELj/Qpjp1r9Pto4lJRORHtvLXrngRza+iQ2SmFPU=) 2025-04-14 00:27:26.308707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOHc60YPAEU5v03feNAcBYNrfLcRKoWluR93nWE47PqtsBxBOF2f0/4AK4zHTMdWUlLjK4pj3ODrhdHMcBHz1Mw=) 2025-04-14 00:27:26.309878 | orchestrator | 2025-04-14 00:27:26.310594 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-04-14 00:27:26.311673 | orchestrator | Monday 14 April 2025 00:27:26 +0000 (0:00:01.155) 0:00:27.189 ********** 2025-04-14 00:27:27.449988 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+lPF82Mfw+oEcGlDHh6sVq9uaaywXMYtbtVxWxKge/nkoSoTVvkWa1JaXFftytbZFxkxrLGNSKjLX7Xd1RoW1dWXpgr6H4TCYXJ/LrGpzxVDZXxN4G7iY9DyaGCg/5XL9dNtp1gm3VRdxSTCQXMK7HBXT3dHbSyEYAWZyt68tvvyhUkM+wrQK+vo9CFBWsMo3X85evVcL4ezijDVOUL/ef8aN6ozqryi7goMNfhE3xSiR/nfXxhFh7zABLvNvQY2HiPded4o33mUB0RYqsU8gRSg8qJ52b111INhWlYoj6oHltw6tyNKrz6nucDN64oYJ/My+pZu+G0ucCq9W3kkahNyUyZpmBttETP9y85tQhNpjIkuXym4F3QBJa+QB7iJOJRCwygfHwHURrhjlKG6dRALLY9tkoLldXM83SvTJIuHDk/Zw/aKd0yzK1i/2HCLbrn0PqyLZRs3rmWYQHbQeXfSt/U3QpZj/dpDnVkKTZlfg80QhPPaG062DXsVxqUU=) 2025-04-14 00:27:27.450653 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN+kV9f031zbgkrARwXqrr4F+1eRNMei/EVM4KCp9ZjaUa/kdOf1akVQ53Cjppo9g0wjsAOkKUEC5o1VoSvKV0Y=) 2025-04-14 00:27:27.451112 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICbCglWSgSUAmYlGKfAghyv7WAFizq0MMX785h9MItVV) 2025-04-14 00:27:27.451757 | orchestrator | 2025-04-14 00:27:27.452408 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-04-14 00:27:27.453155 | orchestrator | Monday 14 April 2025 00:27:27 +0000 (0:00:01.140) 0:00:28.329 ********** 2025-04-14 00:27:27.637964 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-14 00:27:27.639286 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-14 00:27:27.639328 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-14 00:27:27.640672 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-14 00:27:27.641590 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-14 00:27:27.642794 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-14 00:27:27.643571 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-14 00:27:27.644129 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:27:27.645135 | orchestrator | 2025-04-14 00:27:27.645311 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-04-14 00:27:27.646720 | orchestrator | Monday 14 April 2025 
00:27:27 +0000 (0:00:00.190) 0:00:28.520 ********** 2025-04-14 00:27:27.719106 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:27:27.719661 | orchestrator | 2025-04-14 00:27:27.719696 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-04-14 00:27:27.787265 | orchestrator | Monday 14 April 2025 00:27:27 +0000 (0:00:00.079) 0:00:28.599 ********** 2025-04-14 00:27:27.787381 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:27:27.788280 | orchestrator | 2025-04-14 00:27:27.789225 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-04-14 00:27:27.790203 | orchestrator | Monday 14 April 2025 00:27:27 +0000 (0:00:00.070) 0:00:28.669 ********** 2025-04-14 00:27:28.593101 | orchestrator | changed: [testbed-manager] 2025-04-14 00:27:28.595051 | orchestrator | 2025-04-14 00:27:28.596198 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:27:28.597444 | orchestrator | 2025-04-14 00:27:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:27:28.599001 | orchestrator | 2025-04-14 00:27:28 | INFO  | Please wait and do not abort execution. 2025-04-14 00:27:28.599041 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:27:28.600136 | orchestrator | 2025-04-14 00:27:28.601529 | orchestrator | Monday 14 April 2025 00:27:28 +0000 (0:00:00.805) 0:00:29.475 ********** 2025-04-14 00:27:28.601752 | orchestrator | =============================================================================== 2025-04-14 00:27:28.602611 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.18s 2025-04-14 00:27:28.604431 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.50s 2025-04-14 00:27:28.605298 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2025-04-14 00:27:28.605327 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-04-14 00:27:28.606499 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-04-14 00:27:28.607649 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-04-14 00:27:28.608139 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-04-14 00:27:28.609202 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-14 00:27:28.610819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-14 00:27:28.611626 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-04-14 00:27:28.612120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-04-14 00:27:28.613407 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-04-14 00:27:28.613830 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-04-14 00:27:28.614641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-04-14 00:27:28.615446 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries 
----------- 1.09s 2025-04-14 00:27:28.616515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-04-14 00:27:28.617037 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.81s 2025-04-14 00:27:28.618196 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2025-04-14 00:27:28.619354 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2025-04-14 00:27:28.619956 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-04-14 00:27:29.006228 | orchestrator | + osism apply squid 2025-04-14 00:27:30.470485 | orchestrator | 2025-04-14 00:27:30 | INFO  | Task 54809df7-4d8c-4cc7-8fcd-e90615e8ec49 (squid) was prepared for execution. 2025-04-14 00:27:33.788150 | orchestrator | 2025-04-14 00:27:30 | INFO  | It takes a moment until task 54809df7-4d8c-4cc7-8fcd-e90615e8ec49 (squid) has been started and output is visible here. 2025-04-14 00:27:33.788330 | orchestrator | 2025-04-14 00:27:33.789425 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-04-14 00:27:33.789524 | orchestrator | 2025-04-14 00:27:33.790958 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-04-14 00:27:33.791607 | orchestrator | Monday 14 April 2025 00:27:33 +0000 (0:00:00.117) 0:00:00.117 ********** 2025-04-14 00:27:33.893802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-04-14 00:27:33.895751 | orchestrator | 2025-04-14 00:27:33.896081 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-04-14 00:27:33.897296 | orchestrator | Monday 14 April 2025 00:27:33 +0000 (0:00:00.108) 0:00:00.225 ********** 2025-04-14 00:27:35.415334 | orchestrator | ok: [testbed-manager] 2025-04-14 00:27:35.416173 | orchestrator | 2025-04-14 00:27:35.416226 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-04-14 00:27:35.420623 | orchestrator | Monday 14 April 2025 00:27:35 +0000 (0:00:01.520) 0:00:01.745 ********** 2025-04-14 00:27:36.657189 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-04-14 00:27:36.657622 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-04-14 00:27:36.658442 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-04-14 00:27:36.659325 | orchestrator | 2025-04-14 00:27:36.659767 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-04-14 00:27:36.660621 | orchestrator | Monday 14 April 2025 00:27:36 +0000 (0:00:01.239) 0:00:02.985 ********** 2025-04-14 00:27:37.867146 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-04-14 00:27:37.867426 | orchestrator | 2025-04-14 00:27:37.868307 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-04-14 00:27:37.869039 | orchestrator | Monday 14 April 2025 00:27:37 +0000 (0:00:01.210) 0:00:04.195 ********** 2025-04-14 00:27:38.246703 | orchestrator | ok: [testbed-manager] 2025-04-14 00:27:38.247870 | orchestrator | 2025-04-14 00:27:38.247938 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] 
********************* 2025-04-14 00:27:38.248605 | orchestrator | Monday 14 April 2025 00:27:38 +0000 (0:00:00.381) 0:00:04.577 ********** 2025-04-14 00:27:39.288020 | orchestrator | changed: [testbed-manager] 2025-04-14 00:27:39.289764 | orchestrator | 2025-04-14 00:27:39.289792 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-04-14 00:28:11.925104 | orchestrator | Monday 14 April 2025 00:27:39 +0000 (0:00:01.041) 0:00:05.618 ********** 2025-04-14 00:28:11.925256 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-04-14 00:28:24.481564 | orchestrator | ok: [testbed-manager] 2025-04-14 00:28:24.481707 | orchestrator | 2025-04-14 00:28:24.481730 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-04-14 00:28:24.481747 | orchestrator | Monday 14 April 2025 00:28:11 +0000 (0:00:32.632) 0:00:38.251 ********** 2025-04-14 00:28:24.481779 | orchestrator | changed: [testbed-manager] 2025-04-14 00:28:24.484696 | orchestrator | 2025-04-14 00:28:24.484733 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-04-14 00:28:24.484756 | orchestrator | Monday 14 April 2025 00:28:24 +0000 (0:00:12.558) 0:00:50.810 ********** 2025-04-14 00:29:24.559251 | orchestrator | Pausing for 60 seconds 2025-04-14 00:29:24.636977 | orchestrator | changed: [testbed-manager] 2025-04-14 00:29:24.637078 | orchestrator | 2025-04-14 00:29:24.637091 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-04-14 00:29:24.637102 | orchestrator | Monday 14 April 2025 00:29:24 +0000 (0:01:00.075) 0:01:50.886 ********** 2025-04-14 00:29:24.637127 | orchestrator | ok: [testbed-manager] 2025-04-14 00:29:24.638081 | orchestrator | 2025-04-14 00:29:24.639952 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-04-14 00:29:24.640540 | orchestrator | Monday 14 April 2025 00:29:24 +0000 (0:00:00.082) 0:01:50.968 ********** 2025-04-14 00:29:25.286667 | orchestrator | changed: [testbed-manager] 2025-04-14 00:29:25.286899 | orchestrator | 2025-04-14 00:29:25.288111 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:29:25.288475 | orchestrator | 2025-04-14 00:29:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:29:25.288746 | orchestrator | 2025-04-14 00:29:25 | INFO  | Please wait and do not abort execution. 
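
The squid role above is another docker-compose-managed service: it installs the required packages, creates /opt/squid and /opt/squid/configuration, renders the proxy configuration and a docker-compose.yml, then restarts the service and waits (with retries and a 60-second pause) until the container reports healthy. To inspect that state manually, something along these lines should work; the project directory and container name are assumptions based on the directory tasks above, not spelled out in the log:

    # Assumption: the compose project lives in /opt/squid (created by the role above).
    docker compose --project-directory /opt/squid ps
    # Container health as the "Wait for ..." handler sees it:
    docker ps --filter name=squid --format '{{.Names}}: {{.Status}}'   # e.g. "... Up 2 minutes (healthy)"
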
2025-04-14 00:29:25.289966 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:29:25.291288 | orchestrator | 2025-04-14 00:29:25.292112 | orchestrator | Monday 14 April 2025 00:29:25 +0000 (0:00:00.648) 0:01:51.616 ********** 2025-04-14 00:29:25.293065 | orchestrator | =============================================================================== 2025-04-14 00:29:25.293621 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-04-14 00:29:25.294112 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.63s 2025-04-14 00:29:25.294837 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.56s 2025-04-14 00:29:25.295847 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.52s 2025-04-14 00:29:25.296675 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.24s 2025-04-14 00:29:25.297475 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.21s 2025-04-14 00:29:25.298204 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.04s 2025-04-14 00:29:25.298447 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-04-14 00:29:25.298861 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-04-14 00:29:25.299497 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2025-04-14 00:29:25.300590 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-04-14 00:29:25.765636 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-14 00:29:25.768580 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-04-14 00:29:25.768633 | orchestrator | ++ semver 8.1.0 9.0.0 2025-04-14 00:29:25.825740 | orchestrator | + [[ -1 -lt 0 ]] 2025-04-14 00:29:25.829478 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-04-14 00:29:25.829514 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-04-14 00:29:25.829538 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-14 00:29:25.835068 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-04-14 00:29:25.839846 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-04-14 00:29:27.318847 | orchestrator | 2025-04-14 00:29:27 | INFO  | Task 60877668-7eed-4d28-9a52-842832cafeb8 (operator) was prepared for execution. 2025-04-14 00:29:30.454968 | orchestrator | 2025-04-14 00:29:27 | INFO  | It takes a moment until task 60877668-7eed-4d28-9a52-842832cafeb8 (operator) has been started and output is visible here. 
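
The trace above also shows two version-gated tweaks before the operator play: docker_namespace is switched to the kolla/release images, and the three sed calls re-enable the commented-out network_dispatcher_scripts block in the group_vars so vxlan.sh is installed as a routable.d hook on the nodes and the manager. After those edits the file should contain roughly the following block (indentation approximated from the sed patterns, so treat this as a sketch). The operator run itself is limited to the testbed-nodes group with -l and connects as the stock ubuntu user with -u, since the operator account is only created by this play:

    # After the three sed calls, the previously commented block in
    # /opt/configuration/inventory/group_vars/testbed-nodes.yml (and the managers
    # file) should be active again:
    grep -A2 '^network_dispatcher_scripts:' /opt/configuration/inventory/group_vars/testbed-nodes.yml
    #   network_dispatcher_scripts:
    #     - src: /opt/configuration/network/vxlan.sh
    #       dest: routable.d/vxlan.sh
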
2025-04-14 00:29:30.455864 | orchestrator | 2025-04-14 00:29:33.798516 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-04-14 00:29:33.798638 | orchestrator | 2025-04-14 00:29:33.798658 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-04-14 00:29:33.798673 | orchestrator | Monday 14 April 2025 00:29:30 +0000 (0:00:00.094) 0:00:00.094 ********** 2025-04-14 00:29:33.798706 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:29:33.799541 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:33.799575 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:29:33.799781 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:29:33.801052 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:33.801377 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:33.802325 | orchestrator | 2025-04-14 00:29:33.802683 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-04-14 00:29:33.803752 | orchestrator | Monday 14 April 2025 00:29:33 +0000 (0:00:03.349) 0:00:03.444 ********** 2025-04-14 00:29:34.607597 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:34.608976 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:29:34.609356 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:34.612050 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:29:34.613731 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:34.613821 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:29:34.613921 | orchestrator | 2025-04-14 00:29:34.613995 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-04-14 00:29:34.615809 | orchestrator | 2025-04-14 00:29:34.683178 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-04-14 00:29:34.683300 | orchestrator | Monday 14 April 2025 00:29:34 +0000 (0:00:00.810) 0:00:04.254 ********** 2025-04-14 00:29:34.683337 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:29:34.705801 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:29:34.729766 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:29:34.792123 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:34.794216 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:34.795281 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:34.796720 | orchestrator | 2025-04-14 00:29:34.797602 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-04-14 00:29:34.799505 | orchestrator | Monday 14 April 2025 00:29:34 +0000 (0:00:00.183) 0:00:04.438 ********** 2025-04-14 00:29:34.880054 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:29:34.945553 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:29:35.007168 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:29:35.007986 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:35.008942 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:35.009640 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:35.010635 | orchestrator | 2025-04-14 00:29:35.011372 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-04-14 00:29:35.012181 | orchestrator | Monday 14 April 2025 00:29:35 +0000 (0:00:00.214) 0:00:04.653 ********** 2025-04-14 00:29:35.672887 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:35.673218 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:35.674522 | orchestrator | changed: [testbed-node-0] 2025-04-14 
00:29:35.674909 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:35.676173 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:35.679037 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:35.679845 | orchestrator | 2025-04-14 00:29:35.680851 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-04-14 00:29:35.682103 | orchestrator | Monday 14 April 2025 00:29:35 +0000 (0:00:00.665) 0:00:05.319 ********** 2025-04-14 00:29:36.535099 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:36.536636 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:36.536683 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:36.538117 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:36.538689 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:36.539629 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:36.540613 | orchestrator | 2025-04-14 00:29:36.541863 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-04-14 00:29:36.542711 | orchestrator | Monday 14 April 2025 00:29:36 +0000 (0:00:00.861) 0:00:06.180 ********** 2025-04-14 00:29:37.707485 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-04-14 00:29:37.708235 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-04-14 00:29:37.709513 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-04-14 00:29:37.710843 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-04-14 00:29:37.710987 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-04-14 00:29:37.711381 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-04-14 00:29:37.712220 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-04-14 00:29:37.712459 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-04-14 00:29:37.712489 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-04-14 00:29:37.713128 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-04-14 00:29:37.713341 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-04-14 00:29:37.713813 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-04-14 00:29:37.714085 | orchestrator | 2025-04-14 00:29:37.714235 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-04-14 00:29:37.714626 | orchestrator | Monday 14 April 2025 00:29:37 +0000 (0:00:01.171) 0:00:07.352 ********** 2025-04-14 00:29:39.085712 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:39.085883 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:39.085904 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:39.085925 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:39.087323 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:39.090644 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:39.091541 | orchestrator | 2025-04-14 00:29:39.091581 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-04-14 00:29:39.092805 | orchestrator | Monday 14 April 2025 00:29:39 +0000 (0:00:01.375) 0:00:08.727 ********** 2025-04-14 00:29:40.332967 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-04-14 00:29:40.401383 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-04-14 00:29:40.401561 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-04-14 00:29:40.401618 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.404147 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.406840 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.407066 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.407538 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.407871 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-04-14 00:29:40.408293 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.408687 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.409059 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.409489 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.409754 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.410247 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-04-14 00:29:40.414814 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.415122 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.415551 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.416582 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.417158 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.423181 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-04-14 00:29:40.425822 | orchestrator | 2025-04-14 00:29:40.425878 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-04-14 00:29:41.007252 | orchestrator | Monday 14 April 2025 00:29:40 +0000 (0:00:01.319) 0:00:10.047 ********** 2025-04-14 00:29:41.007352 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:41.013258 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:41.013501 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:41.013518 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:41.013525 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:41.013532 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:41.013539 | orchestrator | 2025-04-14 00:29:41.013546 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-04-14 00:29:41.013557 | orchestrator | Monday 14 April 2025 00:29:41 +0000 (0:00:00.604) 0:00:10.651 ********** 2025-04-14 00:29:41.083263 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:29:41.113257 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:29:41.147319 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:29:41.212883 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:41.213135 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:41.213197 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:41.214944 | orchestrator | 2025-04-14 00:29:41.215595 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-04-14 00:29:41.215645 | orchestrator | Monday 14 April 2025 00:29:41 +0000 (0:00:00.206) 0:00:10.858 ********** 2025-04-14 00:29:41.956076 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-14 00:29:41.956643 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 00:29:41.957327 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:41.957689 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:41.958164 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 00:29:41.958672 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:41.959466 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-14 00:29:41.959694 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:41.960110 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 00:29:41.960619 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:41.961354 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 00:29:41.961877 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:41.961922 | orchestrator | 2025-04-14 00:29:41.962305 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-04-14 00:29:41.962672 | orchestrator | Monday 14 April 2025 00:29:41 +0000 (0:00:00.742) 0:00:11.600 ********** 2025-04-14 00:29:42.002813 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:29:42.029535 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:29:42.051312 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:29:42.117468 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:42.118783 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:42.119116 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:42.120540 | orchestrator | 2025-04-14 00:29:42.121367 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-04-14 00:29:42.121862 | orchestrator | Monday 14 April 2025 00:29:42 +0000 (0:00:00.163) 0:00:11.763 ********** 2025-04-14 00:29:42.191830 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:29:42.225105 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:29:42.256581 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:29:42.305351 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:42.305963 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:42.306741 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:42.307784 | orchestrator | 2025-04-14 00:29:42.309146 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-04-14 00:29:42.310394 | orchestrator | Monday 14 April 2025 00:29:42 +0000 (0:00:00.188) 0:00:11.952 ********** 2025-04-14 00:29:42.381212 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:29:42.395326 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:29:42.418950 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:29:42.462430 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:42.463707 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:42.465198 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:42.466704 | orchestrator | 2025-04-14 00:29:42.468011 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-04-14 00:29:42.468852 | orchestrator | Monday 14 April 2025 00:29:42 +0000 (0:00:00.156) 0:00:12.109 ********** 2025-04-14 00:29:43.191509 | orchestrator | changed: [testbed-node-0] 2025-04-14 
00:29:43.192102 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:43.192727 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:43.194766 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:43.195466 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:43.195498 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:43.196358 | orchestrator | 2025-04-14 00:29:43.196848 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-04-14 00:29:43.197366 | orchestrator | Monday 14 April 2025 00:29:43 +0000 (0:00:00.724) 0:00:12.834 ********** 2025-04-14 00:29:43.271223 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:29:43.319831 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:29:43.414892 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:29:43.416504 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:43.418084 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:43.419621 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:43.421134 | orchestrator | 2025-04-14 00:29:43.422531 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:29:43.423311 | orchestrator | 2025-04-14 00:29:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:29:43.424365 | orchestrator | 2025-04-14 00:29:43 | INFO  | Please wait and do not abort execution. 2025-04-14 00:29:43.425494 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.426869 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.428682 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.430013 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.430495 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.432039 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:29:43.433035 | orchestrator | 2025-04-14 00:29:43.434168 | orchestrator | Monday 14 April 2025 00:29:43 +0000 (0:00:00.227) 0:00:13.062 ********** 2025-04-14 00:29:43.435131 | orchestrator | =============================================================================== 2025-04-14 00:29:43.435580 | orchestrator | Gathering Facts --------------------------------------------------------- 3.35s 2025-04-14 00:29:43.437804 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.38s 2025-04-14 00:29:43.439069 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.32s 2025-04-14 00:29:43.439730 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2025-04-14 00:29:43.439955 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.86s 2025-04-14 00:29:43.440854 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s 2025-04-14 00:29:43.443109 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-04-14 00:29:43.444165 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.72s 2025-04-14 00:29:43.445736 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s 2025-04-14 00:29:43.446674 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-04-14 00:29:43.447616 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-04-14 00:29:43.448619 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2025-04-14 00:29:43.449608 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s 2025-04-14 00:29:43.452213 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2025-04-14 00:29:43.452675 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s 2025-04-14 00:29:43.453646 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-04-14 00:29:43.454511 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-04-14 00:29:43.872718 | orchestrator | + osism apply --environment custom facts 2025-04-14 00:29:45.302012 | orchestrator | 2025-04-14 00:29:45 | INFO  | Trying to run play facts in environment custom 2025-04-14 00:29:45.350292 | orchestrator | 2025-04-14 00:29:45 | INFO  | Task 0136f6d2-3551-47ac-8b9b-acf9765be707 (facts) was prepared for execution. 2025-04-14 00:29:45.350652 | orchestrator | 2025-04-14 00:29:45 | INFO  | It takes a moment until task 0136f6d2-3551-47ac-8b9b-acf9765be707 (facts) has been started and output is visible here. 2025-04-14 00:29:48.584619 | orchestrator | 2025-04-14 00:29:48.588265 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-04-14 00:29:48.590508 | orchestrator | 2025-04-14 00:29:48.590881 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-14 00:29:48.590919 | orchestrator | Monday 14 April 2025 00:29:48 +0000 (0:00:00.089) 0:00:00.089 ********** 2025-04-14 00:29:50.100691 | orchestrator | ok: [testbed-manager] 2025-04-14 00:29:50.104109 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:50.104707 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:50.104753 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:50.107324 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:50.109647 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:50.113737 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:50.114960 | orchestrator | 2025-04-14 00:29:50.114999 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-04-14 00:29:50.115023 | orchestrator | Monday 14 April 2025 00:29:50 +0000 (0:00:01.517) 0:00:01.606 ********** 2025-04-14 00:29:51.374797 | orchestrator | ok: [testbed-manager] 2025-04-14 00:29:51.376246 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:29:51.376879 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:51.376918 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:29:51.381594 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:29:51.382280 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:51.382311 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:51.382557 | orchestrator | 2025-04-14 00:29:51.383467 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-04-14 00:29:51.383800 | orchestrator | 2025-04-14 00:29:51.384801 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-14 00:29:51.385664 | orchestrator | Monday 14 April 2025 00:29:51 +0000 (0:00:01.273) 0:00:02.879 ********** 2025-04-14 00:29:51.476203 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:51.476842 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:51.477915 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:51.479042 | orchestrator | 2025-04-14 00:29:51.479810 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-14 00:29:51.480541 | orchestrator | Monday 14 April 2025 00:29:51 +0000 (0:00:00.105) 0:00:02.984 ********** 2025-04-14 00:29:51.635762 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:51.640653 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:51.640858 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:51.640888 | orchestrator | 2025-04-14 00:29:51.640905 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-14 00:29:51.640926 | orchestrator | Monday 14 April 2025 00:29:51 +0000 (0:00:00.158) 0:00:03.143 ********** 2025-04-14 00:29:51.769809 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:51.771368 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:51.773898 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:51.774731 | orchestrator | 2025-04-14 00:29:51.775704 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-14 00:29:51.778767 | orchestrator | Monday 14 April 2025 00:29:51 +0000 (0:00:00.134) 0:00:03.278 ********** 2025-04-14 00:29:51.936752 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:29:51.938075 | orchestrator | 2025-04-14 00:29:51.942230 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-14 00:29:51.943255 | orchestrator | Monday 14 April 2025 00:29:51 +0000 (0:00:00.165) 0:00:03.443 ********** 2025-04-14 00:29:52.396201 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:52.400909 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:52.401014 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:52.404552 | orchestrator | 2025-04-14 00:29:52.519260 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-14 00:29:52.519378 | orchestrator | Monday 14 April 2025 00:29:52 +0000 (0:00:00.461) 0:00:03.905 ********** 2025-04-14 00:29:52.519453 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:29:52.523107 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:29:52.523522 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:29:52.523559 | orchestrator | 2025-04-14 00:29:52.524546 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-14 00:29:52.525312 | orchestrator | Monday 14 April 2025 00:29:52 +0000 (0:00:00.120) 0:00:04.026 ********** 2025-04-14 00:29:53.553302 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:53.554133 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:53.556503 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:53.557217 | orchestrator | 2025-04-14 00:29:53.558321 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-04-14 00:29:53.559105 | orchestrator | Monday 14 April 2025 00:29:53 +0000 (0:00:01.033) 0:00:05.059 ********** 2025-04-14 00:29:54.017567 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:29:54.017795 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:29:54.018209 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:29:54.019051 | orchestrator | 2025-04-14 00:29:54.019758 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-14 00:29:54.023444 | orchestrator | Monday 14 April 2025 00:29:54 +0000 (0:00:00.465) 0:00:05.524 ********** 2025-04-14 00:29:55.161616 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:29:55.164476 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:29:55.164567 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:29:55.165030 | orchestrator | 2025-04-14 00:29:55.165972 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-14 00:29:55.166462 | orchestrator | Monday 14 April 2025 00:29:55 +0000 (0:00:01.143) 0:00:06.668 ********** 2025-04-14 00:30:08.117956 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:08.121594 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:08.121642 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:08.121658 | orchestrator | 2025-04-14 00:30:08.121684 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-04-14 00:30:08.213525 | orchestrator | Monday 14 April 2025 00:30:08 +0000 (0:00:12.952) 0:00:19.620 ********** 2025-04-14 00:30:08.213689 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:08.214080 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:08.218806 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:08.219646 | orchestrator | 2025-04-14 00:30:08.220421 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-04-14 00:30:08.221470 | orchestrator | Monday 14 April 2025 00:30:08 +0000 (0:00:00.101) 0:00:19.722 ********** 2025-04-14 00:30:15.205494 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:15.206470 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:15.208373 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:15.210516 | orchestrator | 2025-04-14 00:30:15.210986 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-04-14 00:30:15.211782 | orchestrator | Monday 14 April 2025 00:30:15 +0000 (0:00:06.990) 0:00:26.712 ********** 2025-04-14 00:30:15.659717 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:15.659969 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:15.661257 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:15.662482 | orchestrator | 2025-04-14 00:30:15.663320 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-04-14 00:30:15.664261 | orchestrator | Monday 14 April 2025 00:30:15 +0000 (0:00:00.453) 0:00:27.166 ********** 2025-04-14 00:30:19.106324 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-04-14 00:30:19.106631 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-04-14 00:30:19.106672 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-04-14 00:30:19.107048 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 
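The fact files copied in this task (testbed_ceph_devices, testbed_ceph_devices_all, testbed_ceph_osd_devices, testbed_ceph_osd_devices_all) are Ansible local facts: executable or static files that later plays read back as ansible_local.<name>. A minimal sketch of that mechanism is shown here; the target directory /etc/ansible/facts.d is Ansible's default location for local facts, while the script body and device names are assumptions for illustration, since the log only shows that files with these names are copied.

- hosts: testbed-nodes             # hypothetical inventory group
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"
    - name: Copy fact file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/testbed_ceph_devices.fact
        mode: "0755"
        content: |
          #!/usr/bin/env bash
          # hypothetical fact script: report block devices intended for Ceph as JSON
          lsblk --json --nodeps -o NAME,SIZE /dev/sdb /dev/sdc 2>/dev/null || echo '{}'
    - name: Re-read local facts
      ansible.builtin.setup:
        filter: ansible_local
    - name: Show the collected fact
      ansible.builtin.debug:
        var: ansible_local.testbed_ceph_devices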
2025-04-14 00:30:19.108046 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-04-14 00:30:19.108443 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-04-14 00:30:19.108691 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-04-14 00:30:19.109430 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-04-14 00:30:19.110539 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-04-14 00:30:19.111563 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-04-14 00:30:19.111811 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-04-14 00:30:19.111834 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-04-14 00:30:19.112243 | orchestrator | 2025-04-14 00:30:19.112587 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-14 00:30:19.113017 | orchestrator | Monday 14 April 2025 00:30:19 +0000 (0:00:03.445) 0:00:30.611 ********** 2025-04-14 00:30:20.114002 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:20.114295 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:20.114327 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:20.115961 | orchestrator | 2025-04-14 00:30:20.116672 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-14 00:30:20.117942 | orchestrator | 2025-04-14 00:30:20.119273 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:30:20.119955 | orchestrator | Monday 14 April 2025 00:30:20 +0000 (0:00:01.008) 0:00:31.620 ********** 2025-04-14 00:30:24.524051 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:24.524651 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:24.525033 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:24.526142 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:24.527507 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:24.528410 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:24.529384 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:24.529706 | orchestrator | 2025-04-14 00:30:24.530154 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:30:24.530711 | orchestrator | 2025-04-14 00:30:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:30:24.530848 | orchestrator | 2025-04-14 00:30:24 | INFO  | Please wait and do not abort execution. 
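The recap that follows closes the custom facts run; the subsequent `osism apply bootstrap` then applies the base host roles (hostname and /etc/hosts, apt proxy, systemd-resolved, apt repositories, rsyslog, hardware clock, configfs, base packages) to all nodes, as the rest of this log shows. A compressed, hand-written sketch of three of those steps follows: handing /etc/resolv.conf to systemd-resolved, forwarding syslog to a local fluentd input, and writing the deb822 ubuntu.sources file used on Ubuntu 24.04. It is illustrative only, not the osism role code; the fluentd port, mirror URL and apt suites are assumptions, not values read from this log.

- hosts: testbed-nodes             # hypothetical inventory group
  become: true
  tasks:
    # osism.commons.resolvconf equivalent: let systemd-resolved own /etc/resolv.conf
    - name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
      ansible.builtin.file:
        src: /run/systemd/resolve/stub-resolv.conf
        dest: /etc/resolv.conf
        state: link
        force: true
    - name: Start/enable systemd-resolved service
      ansible.builtin.service:
        name: systemd-resolved
        state: started
        enabled: true
    # osism.services.rsyslog equivalent: forward all messages to a local fluentd daemon
    # (the port is an assumed example value)
    - name: Forward syslog messages to local fluentd daemon
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/10-fluentd.conf
        content: |
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
      notify: Restart rsyslog
    # osism.commons.repository equivalent: deb822 sources instead of sources.list
    - name: Copy ubuntu.sources file
      ansible.builtin.copy:
        dest: /etc/apt/sources.list.d/ubuntu.sources
        content: |
          Types: deb
          URIs: http://archive.ubuntu.com/ubuntu
          Suites: noble noble-updates noble-backports noble-security
          Components: main restricted universe multiverse
          Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
      notify: Update package cache
  handlers:
    - name: Restart rsyslog
      ansible.builtin.service:
        name: rsyslog
        state: restarted
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true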
2025-04-14 00:30:24.531683 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:30:24.532742 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:30:24.533229 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:30:24.537190 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:30:24.537515 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:30:24.537546 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:30:24.537560 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:30:24.537603 | orchestrator | 2025-04-14 00:30:24.537623 | orchestrator | Monday 14 April 2025 00:30:24 +0000 (0:00:04.410) 0:00:36.031 ********** 2025-04-14 00:30:24.538094 | orchestrator | =============================================================================== 2025-04-14 00:30:24.538121 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.95s 2025-04-14 00:30:24.538140 | orchestrator | Install required packages (Debian) -------------------------------------- 6.99s 2025-04-14 00:30:24.538844 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.41s 2025-04-14 00:30:24.539287 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s 2025-04-14 00:30:24.539387 | orchestrator | Create custom facts directory ------------------------------------------- 1.52s 2025-04-14 00:30:24.539570 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s 2025-04-14 00:30:24.540242 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.14s 2025-04-14 00:30:24.540529 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-04-14 00:30:24.540799 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.01s 2025-04-14 00:30:24.541459 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s 2025-04-14 00:30:24.541757 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s 2025-04-14 00:30:24.542103 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2025-04-14 00:30:24.542664 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2025-04-14 00:30:24.544045 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2025-04-14 00:30:24.544252 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s 2025-04-14 00:30:24.544311 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2025-04-14 00:30:24.544331 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-04-14 00:30:24.544591 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-04-14 00:30:24.981274 | orchestrator | + osism apply bootstrap 2025-04-14 00:30:26.669618 | 
orchestrator | 2025-04-14 00:30:26 | INFO  | Task 2772fb7e-70f8-4ce1-9dff-ec5ffa322e51 (bootstrap) was prepared for execution. 2025-04-14 00:30:29.955599 | orchestrator | 2025-04-14 00:30:26 | INFO  | It takes a moment until task 2772fb7e-70f8-4ce1-9dff-ec5ffa322e51 (bootstrap) has been started and output is visible here. 2025-04-14 00:30:29.955780 | orchestrator | 2025-04-14 00:30:29.957881 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-04-14 00:30:29.958911 | orchestrator | 2025-04-14 00:30:29.958972 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-04-14 00:30:29.959003 | orchestrator | Monday 14 April 2025 00:30:29 +0000 (0:00:00.116) 0:00:00.116 ********** 2025-04-14 00:30:30.044867 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:30.073111 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:30.102057 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:30.129084 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:30.223580 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:30.224528 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:30.228594 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:30.229208 | orchestrator | 2025-04-14 00:30:30.230011 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-14 00:30:30.230511 | orchestrator | 2025-04-14 00:30:30.231375 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:30:30.232148 | orchestrator | Monday 14 April 2025 00:30:30 +0000 (0:00:00.270) 0:00:00.386 ********** 2025-04-14 00:30:34.855019 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:34.855969 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:34.856031 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:34.858658 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:34.859056 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:34.859080 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:34.859096 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:34.861265 | orchestrator | 2025-04-14 00:30:34.861982 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-04-14 00:30:34.862361 | orchestrator | 2025-04-14 00:30:34.863073 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:30:34.863867 | orchestrator | Monday 14 April 2025 00:30:34 +0000 (0:00:04.631) 0:00:05.018 ********** 2025-04-14 00:30:34.953981 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-04-14 00:30:34.994616 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-04-14 00:30:34.994877 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-04-14 00:30:34.995494 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:30:35.048046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-04-14 00:30:35.048477 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-04-14 00:30:35.048890 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:30:35.050169 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-04-14 00:30:35.339748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-04-14 00:30:35.341183 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-3)  2025-04-14 00:30:35.345195 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:30:35.346519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:30:35.347713 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-04-14 00:30:35.348347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-04-14 00:30:35.349360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 00:30:35.350114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:30:35.350810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:30:35.351430 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-04-14 00:30:35.351872 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-04-14 00:30:35.352551 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 00:30:35.353002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:30:35.353619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:30:35.354107 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-04-14 00:30:35.354618 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:35.355245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:30:35.355543 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 00:30:35.356467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:30:35.356664 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:35.357694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-04-14 00:30:35.358114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:30:35.358621 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:30:35.359310 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:30:35.359995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:30:35.360462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 00:30:35.361085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:30:35.361526 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:30:35.362217 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:30:35.364554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:30:35.367211 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-14 00:30:35.368567 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:30:35.372657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 00:30:35.373600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:30:35.374422 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:35.376331 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-14 00:30:35.379085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 00:30:35.381785 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-14 00:30:35.383771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 
00:30:35.385853 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-14 00:30:35.388790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 00:30:35.392518 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:35.392850 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-14 00:30:35.394417 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:35.394819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 00:30:35.394841 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:35.395668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-14 00:30:35.395888 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:35.396495 | orchestrator | 2025-04-14 00:30:35.396928 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-04-14 00:30:35.397660 | orchestrator | 2025-04-14 00:30:35.399159 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-04-14 00:30:35.399863 | orchestrator | Monday 14 April 2025 00:30:35 +0000 (0:00:00.484) 0:00:05.503 ********** 2025-04-14 00:30:35.468514 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:35.509479 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:35.546725 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:35.572963 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:35.630980 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:35.631340 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:35.633266 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:35.633954 | orchestrator | 2025-04-14 00:30:35.634679 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-04-14 00:30:35.635800 | orchestrator | Monday 14 April 2025 00:30:35 +0000 (0:00:00.291) 0:00:05.794 ********** 2025-04-14 00:30:36.892073 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:36.892789 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:36.892821 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:36.894366 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:36.895579 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:36.897443 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:36.898500 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:36.899367 | orchestrator | 2025-04-14 00:30:36.900767 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-04-14 00:30:36.901780 | orchestrator | Monday 14 April 2025 00:30:36 +0000 (0:00:01.260) 0:00:07.054 ********** 2025-04-14 00:30:38.172365 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:38.173883 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:38.175066 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:38.176277 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:38.177291 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:38.177810 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:38.179259 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:38.180354 | orchestrator | 2025-04-14 00:30:38.181184 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-04-14 00:30:38.182581 | orchestrator | Monday 14 April 2025 00:30:38 +0000 (0:00:01.279) 0:00:08.334 ********** 2025-04-14 00:30:38.456361 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:38.460020 | orchestrator | 2025-04-14 00:30:38.461728 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-04-14 00:30:38.463686 | orchestrator | Monday 14 April 2025 00:30:38 +0000 (0:00:00.283) 0:00:08.617 ********** 2025-04-14 00:30:40.651575 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:40.653426 | orchestrator | changed: [testbed-manager] 2025-04-14 00:30:40.654309 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:40.654357 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:40.655636 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:40.657509 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:40.659657 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:40.725068 | orchestrator | 2025-04-14 00:30:40.725134 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-04-14 00:30:40.725149 | orchestrator | Monday 14 April 2025 00:30:40 +0000 (0:00:02.194) 0:00:10.812 ********** 2025-04-14 00:30:40.725173 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:40.916050 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:40.916266 | orchestrator | 2025-04-14 00:30:40.917130 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-04-14 00:30:40.920609 | orchestrator | Monday 14 April 2025 00:30:40 +0000 (0:00:00.266) 0:00:11.078 ********** 2025-04-14 00:30:41.902680 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:41.904230 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:41.904291 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:41.905508 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:41.905608 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:41.905632 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:41.906504 | orchestrator | 2025-04-14 00:30:41.906550 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-04-14 00:30:41.978922 | orchestrator | Monday 14 April 2025 00:30:41 +0000 (0:00:00.986) 0:00:12.065 ********** 2025-04-14 00:30:41.979017 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:42.459926 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:42.460132 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:42.460162 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:42.461121 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:42.461755 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:42.462507 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:42.462938 | orchestrator | 2025-04-14 00:30:42.464624 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-04-14 00:30:42.562975 | orchestrator | Monday 14 April 2025 00:30:42 +0000 (0:00:00.557) 0:00:12.623 ********** 2025-04-14 00:30:42.563103 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:42.603801 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:42.623164 | 
orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:42.945221 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:42.945583 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:42.946864 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:42.946903 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:42.949407 | orchestrator | 2025-04-14 00:30:43.019862 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-04-14 00:30:43.019959 | orchestrator | Monday 14 April 2025 00:30:42 +0000 (0:00:00.483) 0:00:13.107 ********** 2025-04-14 00:30:43.019987 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:43.048753 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:43.072532 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:43.097173 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:43.154132 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:43.154465 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:43.155469 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:43.156058 | orchestrator | 2025-04-14 00:30:43.157474 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-04-14 00:30:43.157923 | orchestrator | Monday 14 April 2025 00:30:43 +0000 (0:00:00.211) 0:00:13.318 ********** 2025-04-14 00:30:43.452115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:43.452624 | orchestrator | 2025-04-14 00:30:43.452976 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-04-14 00:30:43.453008 | orchestrator | Monday 14 April 2025 00:30:43 +0000 (0:00:00.297) 0:00:13.616 ********** 2025-04-14 00:30:43.770710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:43.772854 | orchestrator | 2025-04-14 00:30:44.929428 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-04-14 00:30:44.929555 | orchestrator | Monday 14 April 2025 00:30:43 +0000 (0:00:00.315) 0:00:13.931 ********** 2025-04-14 00:30:44.929592 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:44.929676 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:44.932905 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:44.933043 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:44.933065 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:44.933080 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:44.933094 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:44.933113 | orchestrator | 2025-04-14 00:30:44.933318 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-04-14 00:30:44.933350 | orchestrator | Monday 14 April 2025 00:30:44 +0000 (0:00:01.159) 0:00:15.090 ********** 2025-04-14 00:30:45.003300 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:45.032885 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:45.062775 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:45.096850 | orchestrator | skipping: 
[testbed-node-5] 2025-04-14 00:30:45.177209 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:45.178480 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:45.179606 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:45.180968 | orchestrator | 2025-04-14 00:30:45.182064 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-04-14 00:30:45.183374 | orchestrator | Monday 14 April 2025 00:30:45 +0000 (0:00:00.250) 0:00:15.340 ********** 2025-04-14 00:30:45.722930 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:45.723235 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:45.724693 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:45.725160 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:45.726132 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:45.727942 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:45.728538 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:45.729529 | orchestrator | 2025-04-14 00:30:45.730316 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-04-14 00:30:45.732530 | orchestrator | Monday 14 April 2025 00:30:45 +0000 (0:00:00.544) 0:00:15.885 ********** 2025-04-14 00:30:45.815856 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:45.843501 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:45.871133 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:45.899728 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:45.999213 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:45.999519 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:46.000272 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:46.001086 | orchestrator | 2025-04-14 00:30:46.001892 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-04-14 00:30:46.002936 | orchestrator | Monday 14 April 2025 00:30:45 +0000 (0:00:00.276) 0:00:16.162 ********** 2025-04-14 00:30:46.534483 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:46.535138 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:46.538857 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:46.540209 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:46.540244 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:46.540259 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:46.540273 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:46.540293 | orchestrator | 2025-04-14 00:30:46.540849 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-04-14 00:30:46.540886 | orchestrator | Monday 14 April 2025 00:30:46 +0000 (0:00:00.535) 0:00:16.697 ********** 2025-04-14 00:30:47.667602 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:47.667789 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:47.669554 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:47.670100 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:47.670141 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:47.670582 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:47.671022 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:47.672440 | orchestrator | 2025-04-14 00:30:47.673276 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-04-14 00:30:47.673889 | orchestrator | Monday 14 April 2025 
00:30:47 +0000 (0:00:01.131) 0:00:17.829 ********** 2025-04-14 00:30:48.781566 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:48.782196 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:48.782693 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:48.783550 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:48.784232 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:48.785348 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:48.785545 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:48.786222 | orchestrator | 2025-04-14 00:30:48.786905 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-04-14 00:30:48.787108 | orchestrator | Monday 14 April 2025 00:30:48 +0000 (0:00:01.114) 0:00:18.944 ********** 2025-04-14 00:30:49.109532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:49.110371 | orchestrator | 2025-04-14 00:30:49.110791 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-04-14 00:30:49.112966 | orchestrator | Monday 14 April 2025 00:30:49 +0000 (0:00:00.322) 0:00:19.266 ********** 2025-04-14 00:30:49.182748 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:50.575594 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:30:50.575808 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:50.575840 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:30:50.576663 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:50.576889 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:50.577361 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:30:50.578855 | orchestrator | 2025-04-14 00:30:50.579572 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-04-14 00:30:50.580070 | orchestrator | Monday 14 April 2025 00:30:50 +0000 (0:00:01.472) 0:00:20.738 ********** 2025-04-14 00:30:50.659920 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:50.692861 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:50.721312 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:50.747706 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:50.810336 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:50.810889 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:50.814708 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:50.888499 | orchestrator | 2025-04-14 00:30:50.888663 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-04-14 00:30:50.888685 | orchestrator | Monday 14 April 2025 00:30:50 +0000 (0:00:00.234) 0:00:20.973 ********** 2025-04-14 00:30:50.888719 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:50.914821 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:50.939442 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:50.965701 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:51.043308 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:51.044089 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:51.045301 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:51.045995 | orchestrator | 2025-04-14 00:30:51.047450 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-04-14 00:30:51.048222 | 
orchestrator | Monday 14 April 2025 00:30:51 +0000 (0:00:00.232) 0:00:21.206 ********** 2025-04-14 00:30:51.118779 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:51.148221 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:51.174068 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:51.203023 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:51.284994 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:51.285626 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:51.286512 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:51.286933 | orchestrator | 2025-04-14 00:30:51.287681 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-04-14 00:30:51.288589 | orchestrator | Monday 14 April 2025 00:30:51 +0000 (0:00:00.241) 0:00:21.448 ********** 2025-04-14 00:30:51.609198 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:30:51.610223 | orchestrator | 2025-04-14 00:30:51.611849 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-04-14 00:30:51.612870 | orchestrator | Monday 14 April 2025 00:30:51 +0000 (0:00:00.324) 0:00:21.772 ********** 2025-04-14 00:30:52.125678 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:52.128752 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:52.130604 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:52.130682 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:52.131304 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:52.132440 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:52.133334 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:52.133861 | orchestrator | 2025-04-14 00:30:52.134654 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-04-14 00:30:52.135744 | orchestrator | Monday 14 April 2025 00:30:52 +0000 (0:00:00.514) 0:00:22.287 ********** 2025-04-14 00:30:52.214161 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:30:52.238465 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:30:52.270602 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:30:52.298471 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:30:52.367221 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:30:52.367378 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:30:52.368602 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:30:52.369762 | orchestrator | 2025-04-14 00:30:52.370521 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-04-14 00:30:52.371774 | orchestrator | Monday 14 April 2025 00:30:52 +0000 (0:00:00.242) 0:00:22.530 ********** 2025-04-14 00:30:53.427300 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:53.429195 | orchestrator | changed: [testbed-manager] 2025-04-14 00:30:53.429487 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:53.430268 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:53.430722 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:53.431673 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:53.432087 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:53.432728 | orchestrator | 2025-04-14 00:30:53.433539 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-04-14 00:30:53.434359 | orchestrator | Monday 14 April 2025 00:30:53 +0000 (0:00:01.059) 0:00:23.589 ********** 2025-04-14 00:30:53.968643 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:53.969510 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:53.969558 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:53.969582 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:53.970347 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:30:53.970727 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:30:53.971097 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:30:53.971742 | orchestrator | 2025-04-14 00:30:53.972168 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-04-14 00:30:53.972880 | orchestrator | Monday 14 April 2025 00:30:53 +0000 (0:00:00.538) 0:00:24.128 ********** 2025-04-14 00:30:55.087205 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:30:55.088037 | orchestrator | ok: [testbed-manager] 2025-04-14 00:30:55.089323 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:30:55.090866 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:30:55.091885 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:30:55.092639 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:30:55.093589 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:30:55.094588 | orchestrator | 2025-04-14 00:30:55.095743 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-04-14 00:30:55.096552 | orchestrator | Monday 14 April 2025 00:30:55 +0000 (0:00:01.120) 0:00:25.248 ********** 2025-04-14 00:31:07.778014 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:07.861948 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:07.862130 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:07.862149 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:07.862164 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:07.862177 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:07.862190 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:07.862203 | orchestrator | 2025-04-14 00:31:07.862216 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-04-14 00:31:07.862231 | orchestrator | Monday 14 April 2025 00:31:07 +0000 (0:00:12.687) 0:00:37.935 ********** 2025-04-14 00:31:07.862260 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:07.882561 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:07.930245 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:07.952421 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:08.030851 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:08.031049 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:08.032327 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:08.033116 | orchestrator | 2025-04-14 00:31:08.034005 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-04-14 00:31:08.035126 | orchestrator | Monday 14 April 2025 00:31:08 +0000 (0:00:00.258) 0:00:38.194 ********** 2025-04-14 00:31:08.112315 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:08.149222 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:08.174012 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:08.206856 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:08.285186 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:08.285504 | orchestrator | ok: [testbed-node-1] 2025-04-14 
00:31:08.285543 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:08.285627 | orchestrator | 2025-04-14 00:31:08.285817 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-04-14 00:31:08.287088 | orchestrator | Monday 14 April 2025 00:31:08 +0000 (0:00:00.254) 0:00:38.448 ********** 2025-04-14 00:31:08.362324 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:08.390332 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:08.415485 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:08.443962 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:08.523514 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:08.523773 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:08.524373 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:08.525134 | orchestrator | 2025-04-14 00:31:08.525668 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-04-14 00:31:08.526250 | orchestrator | Monday 14 April 2025 00:31:08 +0000 (0:00:00.238) 0:00:38.687 ********** 2025-04-14 00:31:08.881089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:31:08.881272 | orchestrator | 2025-04-14 00:31:08.881304 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-04-14 00:31:08.881633 | orchestrator | Monday 14 April 2025 00:31:08 +0000 (0:00:00.355) 0:00:39.043 ********** 2025-04-14 00:31:10.488129 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:10.488302 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:10.492326 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:10.493955 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:10.494656 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:10.495649 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:10.496002 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:10.496590 | orchestrator | 2025-04-14 00:31:10.497134 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-04-14 00:31:10.497981 | orchestrator | Monday 14 April 2025 00:31:10 +0000 (0:00:01.607) 0:00:40.650 ********** 2025-04-14 00:31:11.588622 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:11.588819 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:31:11.591524 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:31:11.592913 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:11.592976 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:11.593002 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:31:11.594427 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:11.595064 | orchestrator | 2025-04-14 00:31:11.595775 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-04-14 00:31:11.596815 | orchestrator | Monday 14 April 2025 00:31:11 +0000 (0:00:01.098) 0:00:41.749 ********** 2025-04-14 00:31:12.419755 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:12.420931 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:12.421953 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:12.422005 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:12.422773 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:12.423793 | orchestrator | ok: 
[testbed-node-1] 2025-04-14 00:31:12.423890 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:12.424591 | orchestrator | 2025-04-14 00:31:12.425744 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-04-14 00:31:12.427251 | orchestrator | Monday 14 April 2025 00:31:12 +0000 (0:00:00.832) 0:00:42.581 ********** 2025-04-14 00:31:12.749312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:31:12.750766 | orchestrator | 2025-04-14 00:31:12.751168 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-04-14 00:31:12.751943 | orchestrator | Monday 14 April 2025 00:31:12 +0000 (0:00:00.330) 0:00:42.911 ********** 2025-04-14 00:31:13.808873 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:13.809177 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:31:13.809241 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:31:13.810473 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:31:13.810846 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:13.812066 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:13.812346 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:13.812877 | orchestrator | 2025-04-14 00:31:13.813893 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-04-14 00:31:13.814458 | orchestrator | Monday 14 April 2025 00:31:13 +0000 (0:00:01.058) 0:00:43.969 ********** 2025-04-14 00:31:13.893914 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:31:13.925032 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:31:13.949248 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:31:13.976168 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:31:14.130451 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:31:14.131168 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:31:14.132677 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:31:14.133900 | orchestrator | 2025-04-14 00:31:14.134781 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-04-14 00:31:14.135668 | orchestrator | Monday 14 April 2025 00:31:14 +0000 (0:00:00.324) 0:00:44.293 ********** 2025-04-14 00:31:26.709514 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:31:26.710445 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:26.710466 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:31:26.710477 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:26.711742 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:26.712853 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:31:26.714495 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:26.715815 | orchestrator | 2025-04-14 00:31:26.717133 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-04-14 00:31:26.717871 | orchestrator | Monday 14 April 2025 00:31:26 +0000 (0:00:12.572) 0:00:56.866 ********** 2025-04-14 00:31:27.658944 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:27.659265 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:27.660033 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:27.660723 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:27.661428 | 
orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:27.661968 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:27.662621 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:27.663323 | orchestrator | 2025-04-14 00:31:27.664070 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-04-14 00:31:27.664601 | orchestrator | Monday 14 April 2025 00:31:27 +0000 (0:00:00.956) 0:00:57.822 ********** 2025-04-14 00:31:28.533471 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:28.537769 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:28.538420 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:28.539347 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:28.540745 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:28.542721 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:28.543879 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:28.544553 | orchestrator | 2025-04-14 00:31:28.546979 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-04-14 00:31:28.547751 | orchestrator | Monday 14 April 2025 00:31:28 +0000 (0:00:00.872) 0:00:58.694 ********** 2025-04-14 00:31:28.612245 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:28.649757 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:28.677787 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:28.707195 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:28.776771 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:28.777639 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:28.778676 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:28.779801 | orchestrator | 2025-04-14 00:31:28.780816 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-04-14 00:31:28.781611 | orchestrator | Monday 14 April 2025 00:31:28 +0000 (0:00:00.243) 0:00:58.938 ********** 2025-04-14 00:31:28.856188 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:28.891589 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:28.916734 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:28.959516 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:29.038317 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:29.039715 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:29.040745 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:29.042358 | orchestrator | 2025-04-14 00:31:29.043336 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-04-14 00:31:29.044095 | orchestrator | Monday 14 April 2025 00:31:29 +0000 (0:00:00.263) 0:00:59.202 ********** 2025-04-14 00:31:29.387978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:31:29.388627 | orchestrator | 2025-04-14 00:31:29.391681 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-04-14 00:31:30.851366 | orchestrator | Monday 14 April 2025 00:31:29 +0000 (0:00:00.348) 0:00:59.550 ********** 2025-04-14 00:31:30.851543 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:30.852543 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:30.853701 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:30.855069 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:30.855733 | 
orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:30.856363 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:30.857239 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:30.857791 | orchestrator | 2025-04-14 00:31:30.858495 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-04-14 00:31:30.859264 | orchestrator | Monday 14 April 2025 00:31:30 +0000 (0:00:01.461) 0:01:01.012 ********** 2025-04-14 00:31:31.447293 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:31:31.447906 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:31.448019 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:31:31.448613 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:31.449188 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:31.449956 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:31.450306 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:31:31.451165 | orchestrator | 2025-04-14 00:31:31.451491 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-04-14 00:31:31.451911 | orchestrator | Monday 14 April 2025 00:31:31 +0000 (0:00:00.598) 0:01:01.610 ********** 2025-04-14 00:31:31.537033 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:31.564115 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:31.589940 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:31.617713 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:31.704606 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:31.705810 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:31.708726 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:31.709236 | orchestrator | 2025-04-14 00:31:31.709262 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-04-14 00:31:31.709278 | orchestrator | Monday 14 April 2025 00:31:31 +0000 (0:00:00.257) 0:01:01.867 ********** 2025-04-14 00:31:32.751998 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:31:32.752355 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:32.753741 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:32.757719 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:32.758448 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:32.759435 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:32.760208 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:32.760739 | orchestrator | 2025-04-14 00:31:32.761351 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-04-14 00:31:32.761941 | orchestrator | Monday 14 April 2025 00:31:32 +0000 (0:00:01.044) 0:01:02.912 ********** 2025-04-14 00:31:34.296208 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:31:34.296506 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:31:34.297563 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:31:34.298636 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:31:34.300138 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:31:34.302432 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:31:34.302476 | orchestrator | changed: [testbed-manager] 2025-04-14 00:31:34.303501 | orchestrator | 2025-04-14 00:31:34.304337 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-04-14 00:31:34.304727 | orchestrator | Monday 14 April 2025 00:31:34 +0000 (0:00:01.545) 0:01:04.458 ********** 2025-04-14 00:31:36.393476 | orchestrator | ok: 
[testbed-node-3] 2025-04-14 00:31:36.393816 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:31:36.394764 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:31:36.396406 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:31:36.397091 | orchestrator | ok: [testbed-manager] 2025-04-14 00:31:36.398116 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:31:36.398894 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:31:36.399185 | orchestrator | 2025-04-14 00:31:36.399581 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-04-14 00:31:36.400293 | orchestrator | Monday 14 April 2025 00:31:36 +0000 (0:00:02.095) 0:01:06.553 ********** 2025-04-14 00:32:13.835423 | orchestrator | ok: [testbed-manager] 2025-04-14 00:32:13.835586 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:32:13.835600 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:32:13.835608 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:32:13.835616 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:32:13.835628 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:32:13.836982 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:32:13.837721 | orchestrator | 2025-04-14 00:32:13.838069 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-04-14 00:32:13.838456 | orchestrator | Monday 14 April 2025 00:32:13 +0000 (0:00:37.439) 0:01:43.993 ********** 2025-04-14 00:33:33.327932 | orchestrator | changed: [testbed-manager] 2025-04-14 00:33:33.328614 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:33:33.328655 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:33:33.328673 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:33:33.328696 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:33:33.329247 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:33:33.330461 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:33:33.330999 | orchestrator | 2025-04-14 00:33:33.331744 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-04-14 00:33:33.332443 | orchestrator | Monday 14 April 2025 00:33:33 +0000 (0:01:19.491) 0:03:03.484 ********** 2025-04-14 00:33:34.742485 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:33:34.742724 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:33:34.743557 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:33:34.744189 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:33:34.744462 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:33:34.744975 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:33:34.745321 | orchestrator | ok: [testbed-manager] 2025-04-14 00:33:34.745671 | orchestrator | 2025-04-14 00:33:34.746136 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-04-14 00:33:34.746449 | orchestrator | Monday 14 April 2025 00:33:34 +0000 (0:00:01.419) 0:03:04.904 ********** 2025-04-14 00:33:47.281290 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:33:47.281576 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:33:47.282226 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:33:47.282275 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:33:47.282289 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:33:47.282301 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:33:47.282315 | orchestrator | changed: [testbed-manager] 2025-04-14 00:33:47.282337 | orchestrator | 2025-04-14 00:33:47.282535 | orchestrator | TASK [osism.commons.sysctl : Include sysctl 
tasks] ***************************** 2025-04-14 00:33:47.282563 | orchestrator | Monday 14 April 2025 00:33:47 +0000 (0:00:12.530) 0:03:17.434 ********** 2025-04-14 00:33:47.681511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-04-14 00:33:47.682958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-04-14 00:33:47.683036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-04-14 00:33:47.683079 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-04-14 00:33:47.684665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-04-14 00:33:47.684728 | orchestrator | 2025-04-14 00:33:47.685305 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-04-14 00:33:47.685836 | orchestrator | Monday 14 April 2025 00:33:47 +0000 (0:00:00.409) 0:03:17.844 ********** 2025-04-14 00:33:47.750529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-14 00:33:47.783849 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:47.783987 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-14 00:33:47.816815 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-14 00:33:47.816918 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:33:47.849452 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-04-14 00:33:47.849602 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:33:47.876628 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:33:48.406929 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:33:48.409204 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:33:48.409315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:33:48.410320 | orchestrator | 2025-04-14 00:33:48.411237 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-04-14 00:33:48.412194 | orchestrator | Monday 14 April 2025 00:33:48 +0000 (0:00:00.724) 0:03:18.568 ********** 2025-04-14 00:33:48.467260 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-14 00:33:48.467404 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-14 00:33:48.467804 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-14 00:33:48.511967 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-14 00:33:48.512122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-14 00:33:48.512222 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-14 00:33:48.512312 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-14 00:33:48.512921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-14 00:33:48.513200 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-14 00:33:48.547926 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-14 00:33:48.549676 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-14 00:33:48.550093 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-14 00:33:48.550158 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-14 00:33:48.550425 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-14 00:33:48.598460 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-14 00:33:48.598650 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-14 00:33:48.598941 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-14 00:33:48.598974 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-14 00:33:48.599138 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-14 00:33:48.599524 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-14 00:33:48.599794 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-14 00:33:48.600202 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-14 
00:33:48.600679 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-14 00:33:48.600831 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-14 00:33:48.602198 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-14 00:33:48.602876 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-04-14 00:33:48.604637 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-14 00:33:48.604980 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-04-14 00:33:48.606179 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-14 00:33:48.606294 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-04-14 00:33:48.606318 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-14 00:33:48.606620 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-14 00:33:48.607087 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-04-14 00:33:48.607414 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-14 00:33:48.607657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-04-14 00:33:48.609265 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-04-14 00:33:48.628926 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-04-14 00:33:48.629018 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-04-14 00:33:48.629034 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-04-14 00:33:48.629048 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-04-14 00:33:48.629077 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:52.007899 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:33:52.008946 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:33:52.009867 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:33:52.010644 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-14 00:33:52.010950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-14 00:33:52.013820 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-04-14 00:33:52.014549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-14 00:33:52.014831 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-14 00:33:52.015332 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-04-14 00:33:52.016527 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-14 00:33:52.017285 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-14 00:33:52.017578 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-04-14 00:33:52.018180 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-14 00:33:52.018813 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-14 00:33:52.019321 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-04-14 00:33:52.019915 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-14 00:33:52.020548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-14 00:33:52.020718 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-04-14 00:33:52.021535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-14 00:33:52.021894 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-14 00:33:52.022202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-04-14 00:33:52.022693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-14 00:33:52.023077 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-14 00:33:52.023568 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-04-14 00:33:52.023787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-14 00:33:52.024247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-14 00:33:52.024960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-04-14 00:33:52.025044 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-14 00:33:52.025337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-14 00:33:52.025867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-04-14 00:33:52.026106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-14 00:33:52.026437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-14 00:33:52.026706 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-04-14 00:33:52.027132 | orchestrator | 2025-04-14 00:33:52.027337 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-04-14 00:33:52.027713 | orchestrator | Monday 14 April 2025 00:33:52 +0000 (0:00:03.600) 0:03:22.169 ********** 2025-04-14 00:33:52.593757 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.594405 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
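
[editor's note] The sysctl entries above apply per-group kernel tuning. Below is a minimal, hypothetical sketch of that kind of tuning written as a plain Ansible play using the stock ansible.posix.sysctl module; it is not the osism.commons.sysctl role itself. The play name, host targeting, and group name are assumptions, while the parameter names and values (vm.swappiness=1 on all hosts, and the RabbitMQ keepalive/buffer settings applied only on the control-plane nodes) are taken from the log entries above.

- name: Apply testbed sysctl tuning (sketch, not the osism role)
  hosts: all
  become: true
  vars:
    generic_sysctls:
      - { name: vm.swappiness, value: 1 }
    rabbitmq_sysctls:
      - { name: net.ipv4.tcp_keepalive_time, value: 6 }
      - { name: net.core.wmem_max, value: 16777216 }
      - { name: net.core.rmem_max, value: 16777216 }
      - { name: net.core.somaxconn, value: 4096 }
  tasks:
    - name: Set generic kernel parameters on every node
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
        reload: true
      loop: "{{ generic_sysctls }}"

    - name: Set RabbitMQ-related parameters (skipped on other hosts in the log)
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
      loop: "{{ rabbitmq_sysctls }}"
      when: inventory_hostname in groups.get('rabbitmq', [])  # group name assumed

The mix of "changed" and "skipping" per host in the log is consistent with such group-scoped conditions: the module reports changed only on hosts where the condition holds and the value is actually written.
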
2025-04-14 00:33:52.595464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.597750 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.598327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.600587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.600706 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-04-14 00:33:52.601845 | orchestrator | 2025-04-14 00:33:52.601904 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-04-14 00:33:52.602290 | orchestrator | Monday 14 April 2025 00:33:52 +0000 (0:00:00.586) 0:03:22.756 ********** 2025-04-14 00:33:52.650190 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-14 00:33:52.677910 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:52.758689 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-14 00:33:53.081949 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:33:53.082231 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-14 00:33:53.082557 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:33:53.082601 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-04-14 00:33:53.082943 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:33:53.084599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-14 00:33:53.087790 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-14 00:33:53.088805 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-04-14 00:33:53.089520 | orchestrator | 2025-04-14 00:33:53.090485 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-04-14 00:33:53.091771 | orchestrator | Monday 14 April 2025 00:33:53 +0000 (0:00:00.488) 0:03:23.244 ********** 2025-04-14 00:33:53.138821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-14 00:33:53.165485 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:53.244301 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-14 00:33:53.246001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-14 00:33:53.635598 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:33:53.635772 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:33:53.636916 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-04-14 00:33:53.637767 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:33:53.640990 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-14 00:33:53.641327 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-04-14 00:33:53.642590 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-04-14 00:33:53.642821 | orchestrator | 2025-04-14 00:33:53.643893 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-04-14 00:33:53.644048 | orchestrator | Monday 14 April 2025 00:33:53 +0000 (0:00:00.550) 0:03:23.795 ********** 2025-04-14 00:33:53.722737 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:53.753264 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:33:53.784084 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:33:53.813823 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:33:53.973196 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:33:53.973363 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:33:53.974651 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:33:53.975905 | orchestrator | 2025-04-14 00:33:53.977639 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-04-14 00:33:53.979062 | orchestrator | Monday 14 April 2025 00:33:53 +0000 (0:00:00.340) 0:03:24.135 ********** 2025-04-14 00:33:59.661685 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:33:59.662824 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:33:59.663911 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:33:59.664463 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:33:59.665659 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:33:59.665764 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:33:59.666845 | orchestrator | ok: [testbed-manager] 2025-04-14 00:33:59.667725 | orchestrator | 2025-04-14 00:33:59.668945 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-04-14 00:33:59.669544 | orchestrator | Monday 14 April 2025 00:33:59 +0000 (0:00:05.687) 0:03:29.822 ********** 2025-04-14 00:33:59.704262 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-04-14 00:33:59.740453 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:33:59.742287 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-04-14 00:33:59.780466 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:33:59.782102 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-04-14 00:33:59.824572 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:33:59.825420 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-04-14 00:33:59.911206 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:33:59.911460 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-04-14 00:33:59.989323 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:33:59.989615 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-04-14 00:33:59.990818 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:33:59.992096 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-04-14 00:33:59.992927 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:33:59.993789 | orchestrator | 2025-04-14 00:33:59.994490 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-04-14 00:33:59.994938 | orchestrator | Monday 14 April 2025 00:33:59 +0000 (0:00:00.330) 0:03:30.153 ********** 2025-04-14 00:34:00.992016 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-04-14 00:34:00.992262 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-04-14 00:34:00.993710 | orchestrator | 
ok: [testbed-node-3] => (item=cron) 2025-04-14 00:34:00.994535 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-04-14 00:34:00.995151 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-04-14 00:34:00.996180 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-04-14 00:34:00.996984 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-04-14 00:34:00.997712 | orchestrator | 2025-04-14 00:34:00.998117 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-04-14 00:34:00.999234 | orchestrator | Monday 14 April 2025 00:34:00 +0000 (0:00:00.999) 0:03:31.153 ********** 2025-04-14 00:34:01.442750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:01.443672 | orchestrator | 2025-04-14 00:34:01.443762 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-04-14 00:34:01.443946 | orchestrator | Monday 14 April 2025 00:34:01 +0000 (0:00:00.449) 0:03:31.603 ********** 2025-04-14 00:34:02.692008 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:02.692346 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:02.693140 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:02.694218 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:02.695710 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:02.696483 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:02.697320 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:02.698167 | orchestrator | 2025-04-14 00:34:02.699397 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-04-14 00:34:02.699918 | orchestrator | Monday 14 April 2025 00:34:02 +0000 (0:00:01.251) 0:03:32.854 ********** 2025-04-14 00:34:03.301878 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:03.302639 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:03.303457 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:03.304786 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:03.305564 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:03.306241 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:03.306924 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:03.307443 | orchestrator | 2025-04-14 00:34:03.308177 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-04-14 00:34:03.308733 | orchestrator | Monday 14 April 2025 00:34:03 +0000 (0:00:00.609) 0:03:33.464 ********** 2025-04-14 00:34:03.992970 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:03.993471 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:03.993510 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:03.993533 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:03.993686 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:03.993714 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:03.994226 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:03.994434 | orchestrator | 2025-04-14 00:34:03.994672 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-04-14 00:34:03.995119 | orchestrator | Monday 14 April 2025 00:34:03 +0000 (0:00:00.691) 0:03:34.156 ********** 2025-04-14 00:34:04.663946 | orchestrator | ok: [testbed-node-0] 2025-04-14 
00:34:04.664236 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:04.665476 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:04.666070 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:04.667215 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:04.668735 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:04.669326 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:04.670320 | orchestrator | 2025-04-14 00:34:04.670930 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-04-14 00:34:04.671480 | orchestrator | Monday 14 April 2025 00:34:04 +0000 (0:00:00.668) 0:03:34.825 ********** 2025-04-14 00:34:05.613884 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589066.814197, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.614772 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589075.0684617, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.614834 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589071.8661528, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.615330 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589076.621045, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.615957 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589079.493279, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.616997 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589069.1051314, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.617806 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1744589067.096734, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.618422 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589097.3210373, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.619303 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589006.9224403, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.620075 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589006.2712095, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.620521 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589003.2356143, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.621168 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589000.5306625, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.621701 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589008.6674478, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.622628 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1744589002.6416864, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 00:34:05.623090 | orchestrator | 2025-04-14 00:34:05.623391 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-04-14 00:34:05.624081 | orchestrator | Monday 14 April 2025 00:34:05 +0000 (0:00:00.950) 0:03:35.776 ********** 2025-04-14 00:34:06.829922 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:06.831712 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:06.831760 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:06.831784 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:06.832173 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:06.833509 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:06.833859 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:06.834662 | orchestrator | 2025-04-14 00:34:06.835217 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-04-14 00:34:06.835694 | orchestrator | Monday 14 April 2025 00:34:06 +0000 (0:00:01.212) 0:03:36.988 ********** 2025-04-14 00:34:07.927232 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:07.927592 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:07.928459 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:07.928508 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:07.928567 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:07.932339 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:07.932753 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:07.933083 | orchestrator | 2025-04-14 00:34:07.935823 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the 
motd] ******************** 2025-04-14 00:34:07.936141 | orchestrator | Monday 14 April 2025 00:34:07 +0000 (0:00:01.100) 0:03:38.089 ********** 2025-04-14 00:34:08.001122 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:34:08.036942 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:34:08.082208 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:34:08.125040 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:34:08.175636 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:34:08.246512 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:34:08.247185 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:34:08.248508 | orchestrator | 2025-04-14 00:34:08.249508 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-04-14 00:34:08.250383 | orchestrator | Monday 14 April 2025 00:34:08 +0000 (0:00:00.319) 0:03:38.409 ********** 2025-04-14 00:34:09.009464 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:09.009641 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:09.009664 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:09.009685 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:09.010872 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:09.011360 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:09.012063 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:09.012561 | orchestrator | 2025-04-14 00:34:09.013757 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-04-14 00:34:09.014492 | orchestrator | Monday 14 April 2025 00:34:08 +0000 (0:00:00.758) 0:03:39.167 ********** 2025-04-14 00:34:09.419929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:09.420493 | orchestrator | 2025-04-14 00:34:09.420539 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-04-14 00:34:09.421203 | orchestrator | Monday 14 April 2025 00:34:09 +0000 (0:00:00.414) 0:03:39.582 ********** 2025-04-14 00:34:16.326289 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:16.327463 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:16.327516 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:16.328827 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:16.329661 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:16.330484 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:16.331027 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:16.331631 | orchestrator | 2025-04-14 00:34:16.332528 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-04-14 00:34:16.333987 | orchestrator | Monday 14 April 2025 00:34:16 +0000 (0:00:06.905) 0:03:46.488 ********** 2025-04-14 00:34:17.435130 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:17.435949 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:17.436322 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:17.437506 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:17.438080 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:17.438949 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:17.439624 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:17.440478 | orchestrator | 2025-04-14 00:34:17.441000 | orchestrator | TASK 
[osism.services.rng : Manage rng service] ********************************* 2025-04-14 00:34:17.441522 | orchestrator | Monday 14 April 2025 00:34:17 +0000 (0:00:01.108) 0:03:47.596 ********** 2025-04-14 00:34:18.462816 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:18.464001 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:18.464907 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:18.465918 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:18.466539 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:18.467681 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:18.468207 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:18.469067 | orchestrator | 2025-04-14 00:34:18.470069 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-04-14 00:34:18.470911 | orchestrator | Monday 14 April 2025 00:34:18 +0000 (0:00:01.027) 0:03:48.624 ********** 2025-04-14 00:34:18.875559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:18.876087 | orchestrator | 2025-04-14 00:34:18.876130 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-04-14 00:34:18.876810 | orchestrator | Monday 14 April 2025 00:34:18 +0000 (0:00:00.414) 0:03:49.039 ********** 2025-04-14 00:34:27.299199 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:27.299604 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:27.299959 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:27.301830 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:27.302397 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:27.302826 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:27.304182 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:27.306402 | orchestrator | 2025-04-14 00:34:27.306842 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-04-14 00:34:27.307456 | orchestrator | Monday 14 April 2025 00:34:27 +0000 (0:00:08.420) 0:03:57.459 ********** 2025-04-14 00:34:27.912125 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:27.912671 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:27.913806 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:27.915678 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:27.916098 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:27.917318 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:27.918671 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:27.919134 | orchestrator | 2025-04-14 00:34:27.920565 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-04-14 00:34:27.921164 | orchestrator | Monday 14 April 2025 00:34:27 +0000 (0:00:00.616) 0:03:58.076 ********** 2025-04-14 00:34:29.011988 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:29.012575 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:29.012613 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:29.013247 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:29.014381 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:29.014797 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:29.016340 | orchestrator | changed: [testbed-node-2] 
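
[editor's note] The smartd steps above follow a common pattern: install the package, prepare a log directory, copy a configuration file, then manage the service. The sketch below reproduces that sequence as a standalone Ansible play under stated assumptions; it is not the osism.services.smartd role. The smartd.conf content, the directory mode, and the systemd unit name (smartmontools on Ubuntu, smartd on some other distributions) are assumptions; only the step order comes from the log.

- name: Configure smartmontools (sketch of the steps above)
  hosts: all
  become: true
  tasks:
    - name: Install the smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create the log directory prepared by the role
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"          # mode not visible in the log; assumed

    - name: Drop a minimal smartd.conf (content assumed, not from the role)
      ansible.builtin.copy:
        dest: /etc/smartd.conf
        content: |
          DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03)
        mode: "0644"

    - name: Enable and start the daemon
      ansible.builtin.service:
        name: smartmontools   # unit name on Ubuntu; 'smartd' elsewhere
        state: started
        enabled: true
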
2025-04-14 00:34:29.017261 | orchestrator | 2025-04-14 00:34:29.017317 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-04-14 00:34:29.017874 | orchestrator | Monday 14 April 2025 00:34:29 +0000 (0:00:01.095) 0:03:59.171 ********** 2025-04-14 00:34:30.026602 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:34:30.027648 | orchestrator | changed: [testbed-manager] 2025-04-14 00:34:30.029131 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:34:30.029175 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:34:30.030361 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:34:30.030884 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:34:30.033687 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:34:30.033843 | orchestrator | 2025-04-14 00:34:30.033870 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-04-14 00:34:30.033891 | orchestrator | Monday 14 April 2025 00:34:30 +0000 (0:00:01.017) 0:04:00.189 ********** 2025-04-14 00:34:30.160723 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:30.202139 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:30.261132 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:30.304477 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:30.386955 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:30.387505 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:30.388409 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:30.388516 | orchestrator | 2025-04-14 00:34:30.389791 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-04-14 00:34:30.390441 | orchestrator | Monday 14 April 2025 00:34:30 +0000 (0:00:00.360) 0:04:00.549 ********** 2025-04-14 00:34:30.495190 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:30.534885 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:30.579081 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:30.609916 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:30.690188 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:30.690705 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:30.691727 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:30.693090 | orchestrator | 2025-04-14 00:34:30.693355 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-04-14 00:34:30.694129 | orchestrator | Monday 14 April 2025 00:34:30 +0000 (0:00:00.305) 0:04:00.854 ********** 2025-04-14 00:34:30.816292 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:30.861697 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:30.900425 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:30.939613 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:34:31.032201 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:31.032926 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:31.033816 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:31.034455 | orchestrator | 2025-04-14 00:34:31.035052 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-04-14 00:34:31.036056 | orchestrator | Monday 14 April 2025 00:34:31 +0000 (0:00:00.340) 0:04:01.195 ********** 2025-04-14 00:34:36.685195 | orchestrator | ok: [testbed-manager] 2025-04-14 00:34:36.685827 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:34:36.686829 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:34:36.691589 | orchestrator 
| ok: [testbed-node-5] 2025-04-14 00:34:36.691696 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:34:36.692815 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:34:36.693572 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:34:36.694220 | orchestrator | 2025-04-14 00:34:36.694666 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-04-14 00:34:36.695086 | orchestrator | Monday 14 April 2025 00:34:36 +0000 (0:00:05.652) 0:04:06.847 ********** 2025-04-14 00:34:37.120917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:37.124166 | orchestrator | 2025-04-14 00:34:37.125723 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-04-14 00:34:37.126318 | orchestrator | Monday 14 April 2025 00:34:37 +0000 (0:00:00.434) 0:04:07.282 ********** 2025-04-14 00:34:37.200957 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.202645 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-04-14 00:34:37.242518 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.243217 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:34:37.244414 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-04-14 00:34:37.244447 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.284030 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:34:37.284170 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-04-14 00:34:37.284196 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.327787 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-04-14 00:34:37.328289 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:34:37.328665 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.329266 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-04-14 00:34:37.364894 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:34:37.453514 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:34:37.454679 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.455805 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-04-14 00:34:37.455890 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:34:37.456507 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-04-14 00:34:37.458948 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-04-14 00:34:37.459858 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:34:37.459900 | orchestrator | 2025-04-14 00:34:37.459916 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-04-14 00:34:37.459937 | orchestrator | Monday 14 April 2025 00:34:37 +0000 (0:00:00.333) 0:04:07.616 ********** 2025-04-14 00:34:37.893010 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:37.894073 | orchestrator | 2025-04-14 00:34:37.895503 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-04-14 00:34:37.896472 | orchestrator | Monday 14 April 2025 00:34:37 +0000 (0:00:00.439) 0:04:08.056 ********** 2025-04-14 00:34:37.950418 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-04-14 00:34:37.993191 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:34:38.046222 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-04-14 00:34:38.087959 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-04-14 00:34:38.088080 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:34:38.134655 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-04-14 00:34:38.136206 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:34:38.137227 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-04-14 00:34:38.174217 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:34:38.265884 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-04-14 00:34:38.266165 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:34:38.268474 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:34:38.268966 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-04-14 00:34:38.269781 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:34:38.270182 | orchestrator | 2025-04-14 00:34:38.270663 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-04-14 00:34:38.271179 | orchestrator | Monday 14 April 2025 00:34:38 +0000 (0:00:00.372) 0:04:08.428 ********** 2025-04-14 00:34:38.724093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:34:38.724359 | orchestrator | 2025-04-14 00:34:38.724450 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-04-14 00:34:38.725259 | orchestrator | Monday 14 April 2025 00:34:38 +0000 (0:00:00.457) 0:04:08.886 ********** 2025-04-14 00:35:11.377860 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:11.378008 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:11.380246 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:11.380306 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:11.384461 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:11.384516 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:11.386183 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:11.386202 | orchestrator | 2025-04-14 00:35:11.387443 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-04-14 00:35:18.436442 | orchestrator | Monday 14 April 2025 00:35:11 +0000 (0:00:32.650) 0:04:41.537 ********** 2025-04-14 00:35:18.436604 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:18.437041 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:18.437173 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:18.437979 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:18.440297 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:18.441292 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:18.442419 | orchestrator | changed: [testbed-node-5] 
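Note on the cleanup steps above: the osism.commons.cleanup role purges unwanted packages on every testbed host, which is why "Cleanup installed packages" alone takes roughly 33 seconds per run. The role's task files are not reproduced in this log; the sketch below only illustrates what such package-removal tasks typically look like with the standard apt module. The cleanup_packages variable name and the purge flag are assumptions, not the role's actual code.

    - name: Cleanup installed packages
      ansible.builtin.apt:
        name: "{{ cleanup_packages }}"  # assumed variable holding the distribution-specific package list
        state: absent
        purge: true

    - name: Remove cloud-init package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true
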
2025-04-14 00:35:18.443541 | orchestrator | 2025-04-14 00:35:18.444237 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-04-14 00:35:18.445018 | orchestrator | Monday 14 April 2025 00:35:18 +0000 (0:00:07.061) 0:04:48.598 ********** 2025-04-14 00:35:25.721764 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:25.722307 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:25.723170 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:25.723918 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:25.725381 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:25.725836 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:25.726515 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:25.726825 | orchestrator | 2025-04-14 00:35:25.727608 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-04-14 00:35:25.728290 | orchestrator | Monday 14 April 2025 00:35:25 +0000 (0:00:07.281) 0:04:55.879 ********** 2025-04-14 00:35:27.223930 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:27.224408 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:27.225431 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:27.226825 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:27.227164 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:27.227800 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:27.227827 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:27.228192 | orchestrator | 2025-04-14 00:35:27.229040 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-04-14 00:35:27.229643 | orchestrator | Monday 14 April 2025 00:35:27 +0000 (0:00:01.506) 0:04:57.385 ********** 2025-04-14 00:35:32.942308 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:32.943682 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:32.945034 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:32.945086 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:32.946277 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:32.946703 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:32.947299 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:32.949133 | orchestrator | 2025-04-14 00:35:32.949907 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-04-14 00:35:32.951516 | orchestrator | Monday 14 April 2025 00:35:32 +0000 (0:00:05.718) 0:05:03.104 ********** 2025-04-14 00:35:33.413983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:35:33.414325 | orchestrator | 2025-04-14 00:35:33.414458 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-04-14 00:35:33.415079 | orchestrator | Monday 14 April 2025 00:35:33 +0000 (0:00:00.472) 0:05:03.577 ********** 2025-04-14 00:35:34.166816 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:34.167504 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:34.167553 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:34.167967 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:34.169043 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:34.170000 | orchestrator | changed: 
[testbed-node-1] 2025-04-14 00:35:34.170823 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:34.171220 | orchestrator | 2025-04-14 00:35:34.172475 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-04-14 00:35:35.733241 | orchestrator | Monday 14 April 2025 00:35:34 +0000 (0:00:00.748) 0:05:04.325 ********** 2025-04-14 00:35:35.733440 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:35.734168 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:35.735647 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:35.736663 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:35.738278 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:35.739074 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:35.740149 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:35.740680 | orchestrator | 2025-04-14 00:35:35.741758 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-04-14 00:35:35.742960 | orchestrator | Monday 14 April 2025 00:35:35 +0000 (0:00:01.569) 0:05:05.895 ********** 2025-04-14 00:35:36.543295 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:36.543604 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:36.544195 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:36.545854 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:36.547043 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:36.547705 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:36.548512 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:36.549470 | orchestrator | 2025-04-14 00:35:36.550143 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-04-14 00:35:36.550546 | orchestrator | Monday 14 April 2025 00:35:36 +0000 (0:00:00.808) 0:05:06.703 ********** 2025-04-14 00:35:36.625595 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:36.673910 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:36.728612 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:36.760518 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:36.800004 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:36.858005 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:36.859660 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:36.861039 | orchestrator | 2025-04-14 00:35:36.864114 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-04-14 00:35:36.922241 | orchestrator | Monday 14 April 2025 00:35:36 +0000 (0:00:00.318) 0:05:07.022 ********** 2025-04-14 00:35:36.922396 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:37.020594 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:37.051832 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:37.089591 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:37.260798 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:37.261687 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:37.262823 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:37.264940 | orchestrator | 2025-04-14 00:35:37.265901 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-04-14 00:35:37.265935 | orchestrator | Monday 14 April 2025 00:35:37 +0000 (0:00:00.401) 0:05:07.423 ********** 2025-04-14 00:35:37.393870 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:37.429061 | 
orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:37.466291 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:37.500070 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:37.578763 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:37.580966 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:37.583528 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:37.585006 | orchestrator | 2025-04-14 00:35:37.585043 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-04-14 00:35:37.585861 | orchestrator | Monday 14 April 2025 00:35:37 +0000 (0:00:00.317) 0:05:07.741 ********** 2025-04-14 00:35:37.678576 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:37.712500 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:37.749087 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:37.808254 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:37.885584 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:37.885768 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:37.887108 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:37.887791 | orchestrator | 2025-04-14 00:35:37.888572 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-04-14 00:35:37.889825 | orchestrator | Monday 14 April 2025 00:35:37 +0000 (0:00:00.306) 0:05:08.048 ********** 2025-04-14 00:35:38.009741 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:38.050501 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:38.094978 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:38.138488 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:38.225289 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:38.227616 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:38.228938 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:38.229758 | orchestrator | 2025-04-14 00:35:38.230517 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-04-14 00:35:38.231897 | orchestrator | Monday 14 April 2025 00:35:38 +0000 (0:00:00.340) 0:05:08.388 ********** 2025-04-14 00:35:38.302988 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:38.331624 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:38.384943 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:38.434083 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:38.474316 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:38.540422 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:38.540782 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:38.541476 | orchestrator | 2025-04-14 00:35:38.542166 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-04-14 00:35:38.542660 | orchestrator | Monday 14 April 2025 00:35:38 +0000 (0:00:00.315) 0:05:08.703 ********** 2025-04-14 00:35:38.614744 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:38.647635 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:38.683092 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:38.717543 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:38.751122 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:38.815747 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:38.815904 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:38.816033 | orchestrator | 2025-04-14 00:35:38.816440 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-04-14 00:35:38.816548 | orchestrator | Monday 14 April 2025 00:35:38 +0000 (0:00:00.276) 0:05:08.980 ********** 2025-04-14 00:35:39.401626 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:35:39.404747 | orchestrator | 2025-04-14 00:35:39.404860 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-04-14 00:35:39.404895 | orchestrator | Monday 14 April 2025 00:35:39 +0000 (0:00:00.582) 0:05:09.562 ********** 2025-04-14 00:35:40.157719 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:40.158550 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:40.161489 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:40.163411 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:40.163443 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:40.163458 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:40.163477 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:40.164405 | orchestrator | 2025-04-14 00:35:40.164994 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-04-14 00:35:40.165504 | orchestrator | Monday 14 April 2025 00:35:40 +0000 (0:00:00.754) 0:05:10.317 ********** 2025-04-14 00:35:43.038613 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:35:43.040165 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:35:43.041566 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:35:43.042866 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:35:43.043522 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:35:43.044837 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:35:43.045890 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:43.046516 | orchestrator | 2025-04-14 00:35:43.047096 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-04-14 00:35:43.047409 | orchestrator | Monday 14 April 2025 00:35:43 +0000 (0:00:02.883) 0:05:13.201 ********** 2025-04-14 00:35:43.107898 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-04-14 00:35:43.108284 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-04-14 00:35:43.201570 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-04-14 00:35:43.202679 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-04-14 00:35:43.203606 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-04-14 00:35:43.204346 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-04-14 00:35:43.281703 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:35:43.286407 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-04-14 00:35:43.286606 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-04-14 00:35:43.358238 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:35:43.358711 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-04-14 00:35:43.359098 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-04-14 00:35:43.359471 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-04-14 00:35:43.359804 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-04-14 
00:35:43.428704 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:35:43.429487 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-04-14 00:35:43.509537 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:35:43.510884 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-04-14 00:35:43.512392 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-04-14 00:35:43.513823 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-04-14 00:35:43.514901 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-04-14 00:35:43.515736 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-04-14 00:35:43.708768 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:35:43.709137 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:35:43.710489 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-04-14 00:35:43.710681 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-04-14 00:35:43.712520 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-04-14 00:35:43.712586 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:35:43.712606 | orchestrator | 2025-04-14 00:35:43.712620 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-04-14 00:35:43.713374 | orchestrator | Monday 14 April 2025 00:35:43 +0000 (0:00:00.667) 0:05:13.868 ********** 2025-04-14 00:35:49.118181 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:49.120335 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:49.121056 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:49.121090 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:49.121113 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:49.121513 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:49.122124 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:49.122631 | orchestrator | 2025-04-14 00:35:49.123378 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-04-14 00:35:49.124890 | orchestrator | Monday 14 April 2025 00:35:49 +0000 (0:00:05.409) 0:05:19.278 ********** 2025-04-14 00:35:50.115675 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:50.116335 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:50.117704 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:50.118609 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:50.120023 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:50.120442 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:50.122608 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:50.123204 | orchestrator | 2025-04-14 00:35:50.124230 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-04-14 00:35:50.124750 | orchestrator | Monday 14 April 2025 00:35:50 +0000 (0:00:00.999) 0:05:20.278 ********** 2025-04-14 00:35:56.045157 | orchestrator | ok: [testbed-manager] 2025-04-14 00:35:56.047317 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:56.049708 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:56.049739 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:56.049760 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:56.051088 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:56.052091 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:56.053214 | orchestrator | 
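At this point the osism.services.docker role has added the upstream package repository and its signing key on every host (the manager reports "ok" instead of "changed" because it was already configured). The role's own implementation is not shown in this log; the following is only a hedged sketch of how an apt repository and key are commonly added with standard Ansible modules, and the key URL, key path, and repository line are assumptions rather than what the role necessarily uses.

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg  # assumed upstream key location
        dest: /etc/apt/trusted.gpg.d/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        filename: docker
        state: present
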
2025-04-14 00:35:56.053843 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-04-14 00:35:56.054671 | orchestrator | Monday 14 April 2025 00:35:56 +0000 (0:00:05.925) 0:05:26.204 ********** 2025-04-14 00:35:59.022713 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:35:59.023275 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:35:59.023310 | orchestrator | changed: [testbed-manager] 2025-04-14 00:35:59.023333 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:35:59.023457 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:35:59.024101 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:35:59.024496 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:35:59.028335 | orchestrator | 2025-04-14 00:36:00.492729 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-04-14 00:36:00.492853 | orchestrator | Monday 14 April 2025 00:35:59 +0000 (0:00:02.978) 0:05:29.182 ********** 2025-04-14 00:36:00.492889 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:00.493067 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:00.493496 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:00.493528 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:00.494483 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:00.497273 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:01.898700 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:01.898850 | orchestrator | 2025-04-14 00:36:01.898873 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-04-14 00:36:01.898890 | orchestrator | Monday 14 April 2025 00:36:00 +0000 (0:00:01.472) 0:05:30.654 ********** 2025-04-14 00:36:01.898920 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:01.899415 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:01.899453 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:01.900165 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:01.901517 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:01.902197 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:01.902540 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:01.905217 | orchestrator | 2025-04-14 00:36:01.905631 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-04-14 00:36:01.906194 | orchestrator | Monday 14 April 2025 00:36:01 +0000 (0:00:01.402) 0:05:32.057 ********** 2025-04-14 00:36:02.106817 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:02.172250 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:02.247738 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:02.315129 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:02.522605 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:02.523056 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:02.524416 | orchestrator | changed: [testbed-manager] 2025-04-14 00:36:02.524790 | orchestrator | 2025-04-14 00:36:02.526141 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-04-14 00:36:02.527505 | orchestrator | Monday 14 April 2025 00:36:02 +0000 (0:00:00.626) 0:05:32.683 ********** 2025-04-14 00:36:11.739701 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:11.739899 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:11.739932 | orchestrator | changed: [testbed-node-4] 
2025-04-14 00:36:11.741217 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:11.742759 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:11.744449 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:11.745978 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:11.747433 | orchestrator | 2025-04-14 00:36:11.748040 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-04-14 00:36:11.748078 | orchestrator | Monday 14 April 2025 00:36:11 +0000 (0:00:09.217) 0:05:41.901 ********** 2025-04-14 00:36:12.627083 | orchestrator | changed: [testbed-manager] 2025-04-14 00:36:12.627318 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:12.628708 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:12.630325 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:12.631328 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:12.632430 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:12.633747 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:12.634282 | orchestrator | 2025-04-14 00:36:12.635575 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-04-14 00:36:12.636721 | orchestrator | Monday 14 April 2025 00:36:12 +0000 (0:00:00.886) 0:05:42.787 ********** 2025-04-14 00:36:24.342685 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:24.342885 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:24.342906 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:24.342918 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:24.342930 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:24.342946 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:24.343890 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:24.345976 | orchestrator | 2025-04-14 00:36:24.346761 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-04-14 00:36:24.347880 | orchestrator | Monday 14 April 2025 00:36:24 +0000 (0:00:11.711) 0:05:54.499 ********** 2025-04-14 00:36:36.209813 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:36.210527 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:36.210569 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:36.210585 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:36.210607 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:36.212763 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:36.213928 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:36.215080 | orchestrator | 2025-04-14 00:36:36.216040 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-04-14 00:36:36.217026 | orchestrator | Monday 14 April 2025 00:36:36 +0000 (0:00:11.870) 0:06:06.369 ********** 2025-04-14 00:36:37.319389 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-04-14 00:36:37.320493 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-04-14 00:36:37.321405 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-04-14 00:36:37.322363 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-04-14 00:36:37.323061 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-04-14 00:36:37.324349 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-04-14 00:36:37.325498 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-04-14 00:36:37.326213 | 
orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-04-14 00:36:37.326527 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-04-14 00:36:37.327019 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-04-14 00:36:37.327411 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-04-14 00:36:37.328079 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-04-14 00:36:37.328488 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-04-14 00:36:37.329209 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-04-14 00:36:37.329631 | orchestrator | 2025-04-14 00:36:37.330129 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-04-14 00:36:37.330511 | orchestrator | Monday 14 April 2025 00:36:37 +0000 (0:00:01.111) 0:06:07.481 ********** 2025-04-14 00:36:37.442947 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:36:37.498761 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:37.553601 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:37.611675 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:37.670823 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:37.792800 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:37.796840 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:37.797055 | orchestrator | 2025-04-14 00:36:37.797086 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-04-14 00:36:37.797111 | orchestrator | Monday 14 April 2025 00:36:37 +0000 (0:00:00.474) 0:06:07.956 ********** 2025-04-14 00:36:41.210515 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:41.211620 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:41.212222 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:41.212948 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:41.213782 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:41.214214 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:41.214431 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:41.214776 | orchestrator | 2025-04-14 00:36:41.215200 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-04-14 00:36:41.215583 | orchestrator | Monday 14 April 2025 00:36:41 +0000 (0:00:03.416) 0:06:11.372 ********** 2025-04-14 00:36:41.344591 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:36:41.407593 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:41.684827 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:41.759702 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:41.828908 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:41.949734 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:41.949902 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:41.950824 | orchestrator | 2025-04-14 00:36:41.953185 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-04-14 00:36:41.953944 | orchestrator | Monday 14 April 2025 00:36:41 +0000 (0:00:00.739) 0:06:12.112 ********** 2025-04-14 00:36:42.021602 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-04-14 00:36:42.022415 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-04-14 00:36:42.113210 | orchestrator | skipping: [testbed-manager] 2025-04-14 
00:36:42.113668 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-04-14 00:36:42.114409 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-04-14 00:36:42.185169 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:42.186546 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-04-14 00:36:42.187843 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-04-14 00:36:42.271725 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:42.271923 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-04-14 00:36:42.273378 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-04-14 00:36:42.347918 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:42.349661 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-04-14 00:36:42.349701 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-04-14 00:36:42.426566 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:42.426779 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-04-14 00:36:42.427323 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-04-14 00:36:42.535900 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:42.536294 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-04-14 00:36:42.537800 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-04-14 00:36:42.538367 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:42.539506 | orchestrator | 2025-04-14 00:36:42.542144 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-04-14 00:36:42.542831 | orchestrator | Monday 14 April 2025 00:36:42 +0000 (0:00:00.585) 0:06:12.697 ********** 2025-04-14 00:36:42.681397 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:36:42.752958 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:42.826442 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:42.897509 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:42.964452 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:43.081165 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:43.081581 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:43.081631 | orchestrator | 2025-04-14 00:36:43.084518 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-04-14 00:36:43.087503 | orchestrator | Monday 14 April 2025 00:36:43 +0000 (0:00:00.545) 0:06:13.242 ********** 2025-04-14 00:36:43.230588 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:36:43.295130 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:43.365115 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:43.429622 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:43.493282 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:43.606823 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:43.607500 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:43.608162 | orchestrator | 2025-04-14 00:36:43.608788 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-04-14 00:36:43.609657 | orchestrator | Monday 14 April 2025 00:36:43 +0000 (0:00:00.527) 0:06:13.770 ********** 2025-04-14 00:36:43.737279 | orchestrator | skipping: [testbed-manager] 2025-04-14 
00:36:43.813810 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:36:43.879946 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:36:43.944823 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:36:44.015680 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:36:44.139804 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:36:44.140378 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:36:44.141150 | orchestrator | 2025-04-14 00:36:44.142415 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-04-14 00:36:44.142737 | orchestrator | Monday 14 April 2025 00:36:44 +0000 (0:00:00.531) 0:06:14.302 ********** 2025-04-14 00:36:49.726839 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:49.727177 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:49.727439 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:49.727468 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:49.727486 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:49.727629 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:49.727721 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:49.728160 | orchestrator | 2025-04-14 00:36:49.728364 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-04-14 00:36:49.728552 | orchestrator | Monday 14 April 2025 00:36:49 +0000 (0:00:05.586) 0:06:19.888 ********** 2025-04-14 00:36:50.596310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:36:50.597110 | orchestrator | 2025-04-14 00:36:50.598153 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-04-14 00:36:50.598624 | orchestrator | Monday 14 April 2025 00:36:50 +0000 (0:00:00.871) 0:06:20.760 ********** 2025-04-14 00:36:51.408159 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:51.409651 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:51.410505 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:51.410562 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:51.414167 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:51.414471 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:51.414499 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:51.414514 | orchestrator | 2025-04-14 00:36:51.414530 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-04-14 00:36:51.414552 | orchestrator | Monday 14 April 2025 00:36:51 +0000 (0:00:00.811) 0:06:21.572 ********** 2025-04-14 00:36:52.453263 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:52.453600 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:52.454410 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:52.454451 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:52.454512 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:52.454660 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:52.455121 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:52.458879 | orchestrator | 2025-04-14 00:36:52.459182 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-04-14 00:36:52.459215 | orchestrator | Monday 14 April 2025 00:36:52 +0000 (0:00:01.043) 
0:06:22.615 ********** 2025-04-14 00:36:53.748924 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:53.749471 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:53.749550 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:53.749609 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:53.750519 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:53.752755 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:53.753632 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:53.754460 | orchestrator | 2025-04-14 00:36:53.755582 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-04-14 00:36:53.756778 | orchestrator | Monday 14 April 2025 00:36:53 +0000 (0:00:01.295) 0:06:23.910 ********** 2025-04-14 00:36:53.892373 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:36:55.119646 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:36:55.119840 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:36:55.120650 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:36:55.121474 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:36:55.122625 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:36:55.123526 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:36:55.124899 | orchestrator | 2025-04-14 00:36:55.125529 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-04-14 00:36:55.126459 | orchestrator | Monday 14 April 2025 00:36:55 +0000 (0:00:01.371) 0:06:25.281 ********** 2025-04-14 00:36:56.457972 | orchestrator | ok: [testbed-manager] 2025-04-14 00:36:56.460121 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:56.461605 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:56.462746 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:56.463526 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:56.466273 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:56.467222 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:56.467259 | orchestrator | 2025-04-14 00:36:56.467292 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-04-14 00:36:56.467657 | orchestrator | Monday 14 April 2025 00:36:56 +0000 (0:00:01.336) 0:06:26.618 ********** 2025-04-14 00:36:57.820732 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:36:57.821078 | orchestrator | changed: [testbed-manager] 2025-04-14 00:36:57.821386 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:36:57.821888 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:36:57.822647 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:36:57.823089 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:36:57.823246 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:36:57.824071 | orchestrator | 2025-04-14 00:36:57.824593 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-04-14 00:36:57.825375 | orchestrator | Monday 14 April 2025 00:36:57 +0000 (0:00:01.360) 0:06:27.978 ********** 2025-04-14 00:36:58.939834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:36:58.940974 | orchestrator | 2025-04-14 00:36:58.941787 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-04-14 
00:36:58.946243 | orchestrator | Monday 14 April 2025 00:36:58 +0000 (0:00:01.119) 0:06:29.098 ********** 2025-04-14 00:37:00.343062 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:00.344002 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:00.344048 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:00.345056 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:00.346178 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:00.346656 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:00.347164 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:00.347752 | orchestrator | 2025-04-14 00:37:00.348473 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-04-14 00:37:00.348773 | orchestrator | Monday 14 April 2025 00:37:00 +0000 (0:00:01.404) 0:06:30.503 ********** 2025-04-14 00:37:01.466241 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:01.467144 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:01.468260 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:01.469209 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:01.470756 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:01.471164 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:01.471605 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:01.473119 | orchestrator | 2025-04-14 00:37:01.473686 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-04-14 00:37:01.474449 | orchestrator | Monday 14 April 2025 00:37:01 +0000 (0:00:01.124) 0:06:31.628 ********** 2025-04-14 00:37:02.636248 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:02.636483 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:02.636759 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:02.638177 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:02.638271 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:02.638296 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:02.638529 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:02.638905 | orchestrator | 2025-04-14 00:37:02.639205 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-04-14 00:37:02.639742 | orchestrator | Monday 14 April 2025 00:37:02 +0000 (0:00:01.166) 0:06:32.795 ********** 2025-04-14 00:37:04.004208 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:04.004779 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:04.005257 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:04.012630 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:04.012836 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:04.012862 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:04.012882 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:04.015757 | orchestrator | 2025-04-14 00:37:04.016412 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-04-14 00:37:04.016875 | orchestrator | Monday 14 April 2025 00:37:03 +0000 (0:00:01.372) 0:06:34.167 ********** 2025-04-14 00:37:05.340676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:37:05.340887 | orchestrator | 2025-04-14 00:37:05.341029 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.342460 | orchestrator 
| Monday 14 April 2025 00:37:05 +0000 (0:00:01.004) 0:06:35.172 ********** 2025-04-14 00:37:05.342738 | orchestrator | 2025-04-14 00:37:05.343554 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.344218 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.042) 0:06:35.214 ********** 2025-04-14 00:37:05.344639 | orchestrator | 2025-04-14 00:37:05.345354 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.345646 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.040) 0:06:35.255 ********** 2025-04-14 00:37:05.346257 | orchestrator | 2025-04-14 00:37:05.346583 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.347097 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.051) 0:06:35.306 ********** 2025-04-14 00:37:05.347463 | orchestrator | 2025-04-14 00:37:05.347819 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.348367 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.045) 0:06:35.351 ********** 2025-04-14 00:37:05.350090 | orchestrator | 2025-04-14 00:37:05.350865 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.351575 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.042) 0:06:35.394 ********** 2025-04-14 00:37:05.352494 | orchestrator | 2025-04-14 00:37:05.352920 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-04-14 00:37:05.353733 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.050) 0:06:35.444 ********** 2025-04-14 00:37:05.354120 | orchestrator | 2025-04-14 00:37:05.354940 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-04-14 00:37:05.355215 | orchestrator | Monday 14 April 2025 00:37:05 +0000 (0:00:00.057) 0:06:35.501 ********** 2025-04-14 00:37:06.427652 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:06.427822 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:06.427845 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:06.427866 | orchestrator | 2025-04-14 00:37:06.429035 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-04-14 00:37:06.430489 | orchestrator | Monday 14 April 2025 00:37:06 +0000 (0:00:01.086) 0:06:36.587 ********** 2025-04-14 00:37:08.311126 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:08.311310 | orchestrator | changed: [testbed-manager] 2025-04-14 00:37:08.313171 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:08.315349 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:08.316527 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:08.318155 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:08.318531 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:08.319495 | orchestrator | 2025-04-14 00:37:08.319877 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-04-14 00:37:08.320472 | orchestrator | Monday 14 April 2025 00:37:08 +0000 (0:00:01.884) 0:06:38.471 ********** 2025-04-14 00:37:09.332842 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:09.333714 | orchestrator | changed: [testbed-manager] 2025-04-14 00:37:09.334144 | orchestrator | changed: [testbed-node-4] 
2025-04-14 00:37:09.334717 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:09.335513 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:09.337199 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:09.337760 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:09.338559 | orchestrator | 2025-04-14 00:37:09.339380 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-04-14 00:37:09.340289 | orchestrator | Monday 14 April 2025 00:37:09 +0000 (0:00:01.024) 0:06:39.496 ********** 2025-04-14 00:37:09.470863 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:11.312936 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:11.313120 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:11.313152 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:11.313541 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:11.314387 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:11.315101 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:11.315139 | orchestrator | 2025-04-14 00:37:11.315729 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-04-14 00:37:11.316754 | orchestrator | Monday 14 April 2025 00:37:11 +0000 (0:00:01.976) 0:06:41.473 ********** 2025-04-14 00:37:11.433009 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:11.433179 | orchestrator | 2025-04-14 00:37:11.435374 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-04-14 00:37:12.428550 | orchestrator | Monday 14 April 2025 00:37:11 +0000 (0:00:00.121) 0:06:41.595 ********** 2025-04-14 00:37:12.428648 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:12.429574 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:12.430319 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:12.431542 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:12.432055 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:12.433803 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:12.436348 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:12.436653 | orchestrator | 2025-04-14 00:37:12.437458 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-04-14 00:37:12.438628 | orchestrator | Monday 14 April 2025 00:37:12 +0000 (0:00:00.995) 0:06:42.590 ********** 2025-04-14 00:37:12.567888 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:12.632505 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:12.698100 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:12.959684 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:13.030681 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:13.162090 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:13.164399 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:13.164601 | orchestrator | 2025-04-14 00:37:13.165074 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-04-14 00:37:13.165471 | orchestrator | Monday 14 April 2025 00:37:13 +0000 (0:00:00.731) 0:06:43.322 ********** 2025-04-14 00:37:14.108278 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 
00:37:14.108636 | orchestrator | 2025-04-14 00:37:14.109502 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-04-14 00:37:14.112378 | orchestrator | Monday 14 April 2025 00:37:14 +0000 (0:00:00.948) 0:06:44.271 ********** 2025-04-14 00:37:14.966801 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:14.967621 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:14.978864 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:17.625623 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:17.625757 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:17.625777 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:17.625791 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:17.625806 | orchestrator | 2025-04-14 00:37:17.625823 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-04-14 00:37:17.625838 | orchestrator | Monday 14 April 2025 00:37:14 +0000 (0:00:00.857) 0:06:45.128 ********** 2025-04-14 00:37:17.625869 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-04-14 00:37:17.627078 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-04-14 00:37:17.628106 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-04-14 00:37:17.631608 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-04-14 00:37:17.632963 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-04-14 00:37:17.633069 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-04-14 00:37:17.634783 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-04-14 00:37:17.635647 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-04-14 00:37:17.636449 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-04-14 00:37:17.637223 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-04-14 00:37:17.637786 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-04-14 00:37:17.638478 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-04-14 00:37:17.639538 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-04-14 00:37:17.640181 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-04-14 00:37:17.640761 | orchestrator | 2025-04-14 00:37:17.642185 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-04-14 00:37:17.642759 | orchestrator | Monday 14 April 2025 00:37:17 +0000 (0:00:02.658) 0:06:47.787 ********** 2025-04-14 00:37:17.770811 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:17.832890 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:17.902129 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:17.990095 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:18.055945 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:18.152024 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:18.152411 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:18.152453 | orchestrator | 2025-04-14 00:37:18.152783 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-04-14 00:37:18.153288 | orchestrator | Monday 14 April 2025 00:37:18 +0000 (0:00:00.527) 0:06:48.314 ********** 2025-04-14 00:37:19.026448 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:37:19.026620 | orchestrator | 2025-04-14 00:37:19.033544 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-04-14 00:37:19.036128 | orchestrator | Monday 14 April 2025 00:37:19 +0000 (0:00:00.874) 0:06:49.189 ********** 2025-04-14 00:37:19.433513 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:19.850310 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:19.850526 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:19.850552 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:19.850573 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:19.851006 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:19.851593 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:19.852307 | orchestrator | 2025-04-14 00:37:19.852928 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-04-14 00:37:19.853574 | orchestrator | Monday 14 April 2025 00:37:19 +0000 (0:00:00.820) 0:06:50.010 ********** 2025-04-14 00:37:20.272602 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:20.858074 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:20.859246 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:20.860950 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:20.861729 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:20.862666 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:20.863249 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:20.864447 | orchestrator | 2025-04-14 00:37:20.865070 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-04-14 00:37:20.865772 | orchestrator | Monday 14 April 2025 00:37:20 +0000 (0:00:01.010) 0:06:51.020 ********** 2025-04-14 00:37:21.007582 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:21.081434 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:21.148615 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:21.215978 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:21.286730 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:21.408992 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:21.409565 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:21.410888 | orchestrator | 2025-04-14 00:37:21.412143 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-04-14 00:37:21.415690 | orchestrator | Monday 14 April 2025 00:37:21 +0000 (0:00:00.550) 0:06:51.570 ********** 2025-04-14 00:37:22.761937 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:22.762368 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:22.763067 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:22.765204 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:22.766956 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:22.767873 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:22.768474 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:22.769240 | orchestrator | 2025-04-14 00:37:22.770584 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-04-14 00:37:22.771349 | orchestrator | Monday 14 April 2025 00:37:22 +0000 (0:00:01.352) 0:06:52.922 ********** 2025-04-14 
00:37:22.893999 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:22.958898 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:23.031837 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:23.097131 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:23.161000 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:23.272485 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:23.272630 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:23.273643 | orchestrator | 2025-04-14 00:37:23.274704 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-04-14 00:37:23.278181 | orchestrator | Monday 14 April 2025 00:37:23 +0000 (0:00:00.513) 0:06:53.436 ********** 2025-04-14 00:37:25.174635 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:25.177458 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:25.178161 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:25.178229 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:25.178247 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:25.178271 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:25.179198 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:25.180978 | orchestrator | 2025-04-14 00:37:26.519829 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-04-14 00:37:26.520011 | orchestrator | Monday 14 April 2025 00:37:25 +0000 (0:00:01.894) 0:06:55.331 ********** 2025-04-14 00:37:26.520072 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:26.520201 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:26.520694 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:26.520727 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:26.522261 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:26.522392 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:26.522415 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:26.522870 | orchestrator | 2025-04-14 00:37:26.523284 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-04-14 00:37:26.524286 | orchestrator | Monday 14 April 2025 00:37:26 +0000 (0:00:01.350) 0:06:56.681 ********** 2025-04-14 00:37:28.236823 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:28.237014 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:28.238793 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:28.239931 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:28.241704 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:28.242658 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:28.243515 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:28.244784 | orchestrator | 2025-04-14 00:37:28.245628 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-04-14 00:37:28.246256 | orchestrator | Monday 14 April 2025 00:37:28 +0000 (0:00:01.717) 0:06:58.399 ********** 2025-04-14 00:37:29.857572 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:29.858286 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:29.861301 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:29.862114 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:29.862761 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:29.864277 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:29.864681 | orchestrator | changed: [testbed-node-2] 
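
At this point the bootstrap play has added the login user to the docker group, installed the docker-compose-plugin package, and put an osism.target plus a docker-compose systemd unit on every node. The job output itself does not verify any of this; a minimal spot check by hand on one of the testbed nodes (not part of this job, assuming SSH access) could look like:

  # Compose v2 ships as a Docker CLI plugin with docker-compose-plugin
  docker compose version

  # the play copied and enabled an osism.target unit
  systemctl is-enabled osism.target
  systemctl list-dependencies osism.target --no-pager

  # docker group membership only takes effect on the next login
  getent group docker
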
2025-04-14 00:37:29.865093 | orchestrator | 2025-04-14 00:37:29.866814 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-14 00:37:29.867567 | orchestrator | Monday 14 April 2025 00:37:29 +0000 (0:00:01.617) 0:07:00.017 ********** 2025-04-14 00:37:30.430132 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:30.496178 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:30.943016 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:30.944283 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:30.944687 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:30.945366 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:30.945741 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:30.946758 | orchestrator | 2025-04-14 00:37:30.947958 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-14 00:37:30.949108 | orchestrator | Monday 14 April 2025 00:37:30 +0000 (0:00:01.088) 0:07:01.105 ********** 2025-04-14 00:37:31.077078 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:31.162559 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:31.247051 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:31.317714 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:31.402746 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:31.812872 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:31.813112 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:31.814657 | orchestrator | 2025-04-14 00:37:31.815274 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-04-14 00:37:31.816344 | orchestrator | Monday 14 April 2025 00:37:31 +0000 (0:00:00.868) 0:07:01.974 ********** 2025-04-14 00:37:31.955784 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:32.038933 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:32.125553 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:32.198183 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:32.261679 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:32.389634 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:32.390553 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:32.393963 | orchestrator | 2025-04-14 00:37:32.394683 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-04-14 00:37:32.395058 | orchestrator | Monday 14 April 2025 00:37:32 +0000 (0:00:00.577) 0:07:02.552 ********** 2025-04-14 00:37:32.518069 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:32.591296 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:32.657057 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:32.726151 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:32.798167 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:32.918494 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:32.920059 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:32.921425 | orchestrator | 2025-04-14 00:37:32.923846 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-04-14 00:37:32.924263 | orchestrator | Monday 14 April 2025 00:37:32 +0000 (0:00:00.526) 0:07:03.079 ********** 2025-04-14 00:37:33.251787 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:33.316376 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:33.381455 | orchestrator | ok: [testbed-node-4] 2025-04-14 
00:37:33.459752 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:33.529196 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:33.631414 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:33.632112 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:33.636177 | orchestrator | 2025-04-14 00:37:33.769956 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-04-14 00:37:33.770124 | orchestrator | Monday 14 April 2025 00:37:33 +0000 (0:00:00.713) 0:07:03.792 ********** 2025-04-14 00:37:33.770160 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:33.842833 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:33.915874 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:33.985655 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:34.062094 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:34.184948 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:34.185251 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:34.186136 | orchestrator | 2025-04-14 00:37:34.186384 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-04-14 00:37:34.187151 | orchestrator | Monday 14 April 2025 00:37:34 +0000 (0:00:00.555) 0:07:04.347 ********** 2025-04-14 00:37:39.883902 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:39.884431 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:39.884506 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:39.885240 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:39.886407 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:39.887378 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:39.887476 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:39.889605 | orchestrator | 2025-04-14 00:37:39.889744 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-04-14 00:37:39.890544 | orchestrator | Monday 14 April 2025 00:37:39 +0000 (0:00:05.697) 0:07:10.045 ********** 2025-04-14 00:37:40.020837 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:37:40.082792 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:37:40.162844 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:37:40.235255 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:37:40.299902 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:37:40.434177 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:37:40.434318 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:37:40.436264 | orchestrator | 2025-04-14 00:37:40.436510 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-04-14 00:37:40.437767 | orchestrator | Monday 14 April 2025 00:37:40 +0000 (0:00:00.549) 0:07:10.594 ********** 2025-04-14 00:37:41.504490 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:37:41.504677 | orchestrator | 2025-04-14 00:37:41.505562 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-04-14 00:37:41.507375 | orchestrator | Monday 14 April 2025 00:37:41 +0000 (0:00:01.069) 0:07:11.664 ********** 2025-04-14 00:37:43.267862 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:43.268269 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:43.268312 | orchestrator | ok: 
[testbed-node-3] 2025-04-14 00:37:43.269403 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:43.269870 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:43.270706 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:43.271597 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:43.272439 | orchestrator | 2025-04-14 00:37:43.273031 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-04-14 00:37:43.273732 | orchestrator | Monday 14 April 2025 00:37:43 +0000 (0:00:01.763) 0:07:13.428 ********** 2025-04-14 00:37:44.381858 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:44.382169 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:44.382845 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:44.383904 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:44.384401 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:44.384834 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:44.385881 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:44.386138 | orchestrator | 2025-04-14 00:37:44.386555 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-04-14 00:37:44.387277 | orchestrator | Monday 14 April 2025 00:37:44 +0000 (0:00:01.116) 0:07:14.544 ********** 2025-04-14 00:37:44.826648 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:45.264583 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:45.264761 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:45.264802 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:45.265068 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:45.265097 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:45.265116 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:45.265521 | orchestrator | 2025-04-14 00:37:45.265552 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-04-14 00:37:45.266012 | orchestrator | Monday 14 April 2025 00:37:45 +0000 (0:00:00.874) 0:07:15.419 ********** 2025-04-14 00:37:47.224637 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.224998 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.226509 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.228726 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.229049 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.231162 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.231642 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-04-14 00:37:47.232488 | orchestrator | 2025-04-14 00:37:47.233577 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-04-14 00:37:48.059634 | orchestrator | 
Monday 14 April 2025 00:37:47 +0000 (0:00:01.965) 0:07:17.385 ********** 2025-04-14 00:37:48.059801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:37:48.060216 | orchestrator | 2025-04-14 00:37:48.061301 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-04-14 00:37:48.061960 | orchestrator | Monday 14 April 2025 00:37:48 +0000 (0:00:00.836) 0:07:18.221 ********** 2025-04-14 00:37:57.239810 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:37:57.240129 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:37:57.241626 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:37:57.243447 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:37:57.244557 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:37:57.245398 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:37:57.246125 | orchestrator | changed: [testbed-manager] 2025-04-14 00:37:57.247027 | orchestrator | 2025-04-14 00:37:57.247920 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-04-14 00:37:57.248384 | orchestrator | Monday 14 April 2025 00:37:57 +0000 (0:00:09.179) 0:07:27.400 ********** 2025-04-14 00:37:59.039838 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:37:59.040229 | orchestrator | ok: [testbed-manager] 2025-04-14 00:37:59.041046 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:37:59.041648 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:37:59.042196 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:37:59.043105 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:37:59.043460 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:37:59.044057 | orchestrator | 2025-04-14 00:37:59.044571 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-04-14 00:37:59.045203 | orchestrator | Monday 14 April 2025 00:37:59 +0000 (0:00:01.800) 0:07:29.201 ********** 2025-04-14 00:38:00.299406 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:00.299746 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:00.300743 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:00.301286 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:00.302943 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:00.305752 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:01.724266 | orchestrator | 2025-04-14 00:38:01.724443 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-04-14 00:38:01.724463 | orchestrator | Monday 14 April 2025 00:38:00 +0000 (0:00:01.258) 0:07:30.460 ********** 2025-04-14 00:38:01.724493 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:01.724564 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:01.725860 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:01.728573 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:01.729392 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:01.729417 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:01.729432 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:01.729450 | orchestrator | 2025-04-14 00:38:01.729849 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-04-14 00:38:01.731181 | orchestrator | 2025-04-14 
00:38:01.731303 | orchestrator | TASK [Include hardening role] ************************************************** 2025-04-14 00:38:01.732309 | orchestrator | Monday 14 April 2025 00:38:01 +0000 (0:00:01.427) 0:07:31.887 ********** 2025-04-14 00:38:01.852756 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:38:01.912497 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:38:01.980298 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:38:02.048459 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:38:02.111172 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:38:02.235768 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:38:02.236721 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:38:02.237465 | orchestrator | 2025-04-14 00:38:02.238840 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-04-14 00:38:02.239640 | orchestrator | 2025-04-14 00:38:02.240480 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-04-14 00:38:02.241145 | orchestrator | Monday 14 April 2025 00:38:02 +0000 (0:00:00.510) 0:07:32.398 ********** 2025-04-14 00:38:03.535689 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:03.536385 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:03.537501 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:03.540573 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:03.542234 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:03.542378 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:03.542415 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:03.542787 | orchestrator | 2025-04-14 00:38:03.543555 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-04-14 00:38:03.544094 | orchestrator | Monday 14 April 2025 00:38:03 +0000 (0:00:01.295) 0:07:33.693 ********** 2025-04-14 00:38:04.968538 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:04.969217 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:04.969685 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:04.975787 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:04.978519 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:04.980217 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:04.981596 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:04.982146 | orchestrator | 2025-04-14 00:38:04.983163 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-04-14 00:38:04.985576 | orchestrator | Monday 14 April 2025 00:38:04 +0000 (0:00:01.435) 0:07:35.129 ********** 2025-04-14 00:38:05.137747 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:38:05.449039 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:38:05.525564 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:38:05.618831 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:38:05.696820 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:38:06.122423 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:38:06.123523 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:38:06.124838 | orchestrator | 2025-04-14 00:38:06.127268 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-04-14 00:38:06.128668 | orchestrator | Monday 14 April 2025 00:38:06 +0000 (0:00:01.154) 0:07:36.284 ********** 2025-04-14 00:38:07.426734 | orchestrator | changed: 
[testbed-node-3] 2025-04-14 00:38:07.427816 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:07.429511 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:07.429566 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:07.430433 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:07.431184 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:07.431791 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:07.432593 | orchestrator | 2025-04-14 00:38:07.433109 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-04-14 00:38:07.434282 | orchestrator | 2025-04-14 00:38:07.434613 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-04-14 00:38:07.435033 | orchestrator | Monday 14 April 2025 00:38:07 +0000 (0:00:01.303) 0:07:37.587 ********** 2025-04-14 00:38:08.297500 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:38:08.298133 | orchestrator | 2025-04-14 00:38:08.298956 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-14 00:38:08.300067 | orchestrator | Monday 14 April 2025 00:38:08 +0000 (0:00:00.868) 0:07:38.456 ********** 2025-04-14 00:38:08.695506 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:09.313069 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:09.313574 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:09.314408 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:09.315053 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:09.315682 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:09.316008 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:09.316480 | orchestrator | 2025-04-14 00:38:09.316786 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-04-14 00:38:09.318706 | orchestrator | Monday 14 April 2025 00:38:09 +0000 (0:00:01.020) 0:07:39.477 ********** 2025-04-14 00:38:10.434356 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:10.437511 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:10.439166 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:10.439191 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:10.440393 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:10.440717 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:10.441991 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:10.442829 | orchestrator | 2025-04-14 00:38:10.444112 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-04-14 00:38:10.444412 | orchestrator | Monday 14 April 2025 00:38:10 +0000 (0:00:01.117) 0:07:40.594 ********** 2025-04-14 00:38:11.467586 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:38:11.470628 | orchestrator | 2025-04-14 00:38:12.287555 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-04-14 00:38:12.287678 | orchestrator | Monday 14 April 2025 00:38:11 +0000 (0:00:01.033) 0:07:41.628 ********** 2025-04-14 00:38:12.287715 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:12.288307 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:12.289279 | orchestrator | ok: 
[testbed-node-4] 2025-04-14 00:38:12.290435 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:12.291501 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:12.292762 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:12.293069 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:12.294141 | orchestrator | 2025-04-14 00:38:12.294959 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-04-14 00:38:12.295744 | orchestrator | Monday 14 April 2025 00:38:12 +0000 (0:00:00.819) 0:07:42.447 ********** 2025-04-14 00:38:13.409817 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:13.410776 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:13.410823 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:13.411193 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:13.411915 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:13.412659 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:13.413820 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:13.414470 | orchestrator | 2025-04-14 00:38:13.415547 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:38:13.415593 | orchestrator | 2025-04-14 00:38:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:38:13.416368 | orchestrator | 2025-04-14 00:38:13 | INFO  | Please wait and do not abort execution. 2025-04-14 00:38:13.416414 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-04-14 00:38:13.416795 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-14 00:38:13.417305 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-14 00:38:13.417780 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-14 00:38:13.418189 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-04-14 00:38:13.418529 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-14 00:38:13.419149 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-04-14 00:38:13.419641 | orchestrator | 2025-04-14 00:38:13.420229 | orchestrator | Monday 14 April 2025 00:38:13 +0000 (0:00:01.124) 0:07:43.572 ********** 2025-04-14 00:38:13.420965 | orchestrator | =============================================================================== 2025-04-14 00:38:13.421196 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.49s 2025-04-14 00:38:13.421746 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.44s 2025-04-14 00:38:13.422359 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.65s 2025-04-14 00:38:13.422591 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.69s 2025-04-14 00:38:13.423159 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.57s 2025-04-14 00:38:13.423245 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.53s 2025-04-14 00:38:13.424018 | orchestrator | osism.services.docker : 
Install docker package ------------------------- 11.87s 2025-04-14 00:38:13.424547 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 11.71s 2025-04-14 00:38:13.424596 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.22s 2025-04-14 00:38:13.424937 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.18s 2025-04-14 00:38:13.425482 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.42s 2025-04-14 00:38:13.425938 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.28s 2025-04-14 00:38:13.426922 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.06s 2025-04-14 00:38:13.427703 | orchestrator | osism.services.rng : Install rng package -------------------------------- 6.91s 2025-04-14 00:38:13.428579 | orchestrator | osism.services.docker : Add repository ---------------------------------- 5.93s 2025-04-14 00:38:13.429069 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.72s 2025-04-14 00:38:13.429749 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.70s 2025-04-14 00:38:13.430139 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.69s 2025-04-14 00:38:13.430504 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.65s 2025-04-14 00:38:13.430623 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 5.59s 2025-04-14 00:38:14.169170 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-04-14 00:38:16.155020 | orchestrator | + osism apply network 2025-04-14 00:38:16.155163 | orchestrator | 2025-04-14 00:38:16 | INFO  | Task 61d325db-0bb9-4d1d-b4c7-7a57563d2025 (network) was prepared for execution. 2025-04-14 00:38:19.806232 | orchestrator | 2025-04-14 00:38:16 | INFO  | It takes a moment until task 61d325db-0bb9-4d1d-b4c7-7a57563d2025 (network) has been started and output is visible here. 
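
The bootstrap plays are finished at this point (see the PLAY RECAP and timing summary above): chrony has been configured and restarted, lldpd installed and started, journald reconfigured, and the osism.bootstrap state facts written, before the job hands over to osism apply network. The tools installed by those roles allow a quick manual check on any node; the following commands are illustrative only and are not part of the job output:

  # chrony should be the active time source after the restart handler ran
  chronyc tracking
  chronyc sources -v

  # lldpd was installed via the Debian-family task file and its service managed
  systemctl is-active lldpd
  lldpcli show neighbors
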
2025-04-14 00:38:19.806379 | orchestrator | 2025-04-14 00:38:19.807039 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-04-14 00:38:19.809506 | orchestrator | 2025-04-14 00:38:19.809587 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-04-14 00:38:19.810584 | orchestrator | Monday 14 April 2025 00:38:19 +0000 (0:00:00.237) 0:00:00.237 ********** 2025-04-14 00:38:19.959129 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:20.035847 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:20.113806 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:20.192518 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:20.271791 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:20.520932 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:20.521100 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:20.521977 | orchestrator | 2025-04-14 00:38:20.522180 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-04-14 00:38:20.523397 | orchestrator | Monday 14 April 2025 00:38:20 +0000 (0:00:00.716) 0:00:00.953 ********** 2025-04-14 00:38:21.745817 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:38:21.746255 | orchestrator | 2025-04-14 00:38:21.747050 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-04-14 00:38:21.750299 | orchestrator | Monday 14 April 2025 00:38:21 +0000 (0:00:01.222) 0:00:02.175 ********** 2025-04-14 00:38:23.593144 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:23.596577 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:23.598158 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:23.598215 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:23.598241 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:23.598447 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:23.599169 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:23.600882 | orchestrator | 2025-04-14 00:38:25.297850 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-04-14 00:38:25.298003 | orchestrator | Monday 14 April 2025 00:38:23 +0000 (0:00:01.849) 0:00:04.025 ********** 2025-04-14 00:38:25.298096 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:25.299141 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:25.302733 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:25.303516 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:25.303578 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:25.303617 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:25.304476 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:25.305173 | orchestrator | 2025-04-14 00:38:25.306229 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-04-14 00:38:25.306999 | orchestrator | Monday 14 April 2025 00:38:25 +0000 (0:00:01.702) 0:00:05.728 ********** 2025-04-14 00:38:25.838079 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-04-14 00:38:25.843206 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-04-14 00:38:26.433681 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-04-14 00:38:26.433819 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-04-14 00:38:26.434600 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-04-14 00:38:26.435992 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-04-14 00:38:26.439508 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-04-14 00:38:28.263091 | orchestrator | 2025-04-14 00:38:28.263219 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-04-14 00:38:28.263239 | orchestrator | Monday 14 April 2025 00:38:26 +0000 (0:00:01.137) 0:00:06.865 ********** 2025-04-14 00:38:28.263272 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-14 00:38:28.263592 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-14 00:38:28.265202 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 00:38:28.266726 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 00:38:28.267514 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-14 00:38:28.268284 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-14 00:38:28.269300 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 00:38:28.269769 | orchestrator | 2025-04-14 00:38:28.271510 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-04-14 00:38:29.914303 | orchestrator | Monday 14 April 2025 00:38:28 +0000 (0:00:01.831) 0:00:08.696 ********** 2025-04-14 00:38:29.914537 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:29.916458 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:29.916742 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:29.917640 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:29.919543 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:29.921696 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:29.922509 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:29.923241 | orchestrator | 2025-04-14 00:38:29.924176 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-04-14 00:38:29.926294 | orchestrator | Monday 14 April 2025 00:38:29 +0000 (0:00:01.646) 0:00:10.343 ********** 2025-04-14 00:38:30.533179 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 00:38:30.979804 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 00:38:30.980028 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-14 00:38:30.980938 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-14 00:38:30.982368 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 00:38:30.986164 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-14 00:38:30.986303 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-14 00:38:30.986347 | orchestrator | 2025-04-14 00:38:30.986364 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-04-14 00:38:30.986385 | orchestrator | Monday 14 April 2025 00:38:30 +0000 (0:00:01.070) 0:00:11.413 ********** 2025-04-14 00:38:31.474301 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:31.568042 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:31.803811 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:32.216105 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:32.216877 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:32.218582 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:32.219453 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:32.220901 | orchestrator | 2025-04-14 
00:38:32.221758 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-04-14 00:38:32.222574 | orchestrator | Monday 14 April 2025 00:38:32 +0000 (0:00:01.231) 0:00:12.645 ********** 2025-04-14 00:38:32.379916 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:38:32.460718 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:38:32.551806 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:38:32.631180 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:38:32.711560 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:38:33.040115 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:38:33.041798 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:38:33.043259 | orchestrator | 2025-04-14 00:38:33.044857 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-04-14 00:38:33.045611 | orchestrator | Monday 14 April 2025 00:38:33 +0000 (0:00:00.823) 0:00:13.469 ********** 2025-04-14 00:38:34.906372 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:34.910085 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:34.912338 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:34.912366 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:34.913444 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:34.913479 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:34.913496 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:34.913601 | orchestrator | 2025-04-14 00:38:34.916842 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-04-14 00:38:34.917162 | orchestrator | Monday 14 April 2025 00:38:34 +0000 (0:00:01.859) 0:00:15.329 ********** 2025-04-14 00:38:35.826565 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-04-14 00:38:36.937085 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.937544 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.938252 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.938842 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.939407 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.940025 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.940449 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-04-14 00:38:36.940800 | orchestrator | 2025-04-14 00:38:36.941534 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-04-14 00:38:36.941928 | orchestrator | Monday 14 April 2025 00:38:36 +0000 (0:00:02.038) 0:00:17.367 ********** 2025-04-14 00:38:38.535392 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:38.536020 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:38:38.536135 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:38:38.537643 | 
orchestrator | changed: [testbed-node-1] 2025-04-14 00:38:38.538837 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:38:38.539123 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:38:38.539561 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:38:38.540052 | orchestrator | 2025-04-14 00:38:38.540757 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-04-14 00:38:38.541155 | orchestrator | Monday 14 April 2025 00:38:38 +0000 (0:00:01.600) 0:00:18.967 ********** 2025-04-14 00:38:39.968558 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:38:39.969292 | orchestrator | 2025-04-14 00:38:39.970175 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-04-14 00:38:39.971522 | orchestrator | Monday 14 April 2025 00:38:39 +0000 (0:00:01.430) 0:00:20.398 ********** 2025-04-14 00:38:40.547794 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:40.973841 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:40.974325 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:40.974363 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:40.975914 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:40.976082 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:40.977143 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:40.977914 | orchestrator | 2025-04-14 00:38:40.978257 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-04-14 00:38:40.978930 | orchestrator | Monday 14 April 2025 00:38:40 +0000 (0:00:01.008) 0:00:21.406 ********** 2025-04-14 00:38:41.133215 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:41.220483 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:38:41.502746 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:38:41.608928 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:38:41.713139 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:38:41.869865 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:38:41.870222 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:38:41.870840 | orchestrator | 2025-04-14 00:38:41.871111 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-04-14 00:38:41.871956 | orchestrator | Monday 14 April 2025 00:38:41 +0000 (0:00:00.892) 0:00:22.298 ********** 2025-04-14 00:38:42.376238 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.376691 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.377544 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.378524 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.467913 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.468744 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.927381 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.927553 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.927581 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.928065 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.929098 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.929445 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.929749 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-04-14 00:38:42.929997 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-04-14 00:38:42.930355 | orchestrator | 2025-04-14 00:38:42.931353 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-04-14 00:38:43.311718 | orchestrator | Monday 14 April 2025 00:38:42 +0000 (0:00:01.062) 0:00:23.360 ********** 2025-04-14 00:38:43.311899 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:38:43.390660 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:38:43.480983 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:38:43.564660 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:38:43.649793 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:38:44.849161 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:38:44.849467 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:38:44.850223 | orchestrator | 2025-04-14 00:38:44.853393 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-04-14 00:38:45.018127 | orchestrator | Monday 14 April 2025 00:38:44 +0000 (0:00:01.917) 0:00:25.278 ********** 2025-04-14 00:38:45.018266 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:38:45.112272 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:38:45.382532 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:38:45.466907 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:38:45.551449 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:38:45.593718 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:38:45.594277 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:38:45.595482 | orchestrator | 2025-04-14 00:38:45.597274 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:38:45.597539 | orchestrator | 2025-04-14 00:38:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:38:45.597653 | orchestrator | 2025-04-14 00:38:45 | INFO  | Please wait and do not abort execution. 
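
The network role above rendered a new netplan file (01-osism.yaml, as listed in the cleanup task), removed the cloud-init generated 50-cloud-init.yaml, installed networkd-dispatcher, and dropped the iptables.sh and vxlan.sh hooks into its routable.d directory. A rough way to inspect the outcome on a node (illustrative only, not executed by this job; netplan get is available in recent netplan.io releases):

  # merged netplan configuration after 50-cloud-init.yaml was removed
  netplan get
  ls /etc/netplan/

  # dispatcher hooks fire when an interface reaches the routable state
  ls /etc/networkd-dispatcher/routable.d/
  systemctl is-active networkd-dispatcher

  # resulting addresses, including any interfaces the vxlan.sh hook may create
  ip -brief address
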
2025-04-14 00:38:45.599139 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.601428 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.602479 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.602540 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.603354 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.604042 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.604987 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:38:45.605453 | orchestrator | 2025-04-14 00:38:45.605961 | orchestrator | Monday 14 April 2025 00:38:45 +0000 (0:00:00.748) 0:00:26.026 ********** 2025-04-14 00:38:45.606934 | orchestrator | =============================================================================== 2025-04-14 00:38:45.607993 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 2.04s 2025-04-14 00:38:45.608627 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.92s 2025-04-14 00:38:45.609302 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.86s 2025-04-14 00:38:45.609701 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.85s 2025-04-14 00:38:45.610350 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.83s 2025-04-14 00:38:45.610718 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s 2025-04-14 00:38:45.611196 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s 2025-04-14 00:38:45.611594 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.60s 2025-04-14 00:38:45.612031 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.43s 2025-04-14 00:38:45.612677 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s 2025-04-14 00:38:45.612840 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-04-14 00:38:45.613230 | orchestrator | osism.commons.network : Create required directories --------------------- 1.14s 2025-04-14 00:38:45.613706 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.07s 2025-04-14 00:38:45.614260 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.06s 2025-04-14 00:38:45.614329 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2025-04-14 00:38:45.614640 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.89s 2025-04-14 00:38:45.614806 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.82s 2025-04-14 00:38:45.615223 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.75s 2025-04-14 00:38:45.615404 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.72s 2025-04-14 00:38:46.203593 | orchestrator | + osism apply wireguard 2025-04-14 00:38:47.692143 | orchestrator | 2025-04-14 00:38:47 | INFO  | Task 02eb4a46-01f2-4e5b-a574-7dd10d7bc5b5 (wireguard) was prepared for execution. 2025-04-14 00:38:50.996667 | orchestrator | 2025-04-14 00:38:47 | INFO  | It takes a moment until task 02eb4a46-01f2-4e5b-a574-7dd10d7bc5b5 (wireguard) has been started and output is visible here. 2025-04-14 00:38:50.996818 | orchestrator | 2025-04-14 00:38:50.996904 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-04-14 00:38:50.997341 | orchestrator | 2025-04-14 00:38:51.000642 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-04-14 00:38:51.003112 | orchestrator | Monday 14 April 2025 00:38:50 +0000 (0:00:00.196) 0:00:00.196 ********** 2025-04-14 00:38:52.544966 | orchestrator | ok: [testbed-manager] 2025-04-14 00:38:52.545359 | orchestrator | 2025-04-14 00:38:52.547244 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-04-14 00:38:59.453378 | orchestrator | Monday 14 April 2025 00:38:52 +0000 (0:00:01.551) 0:00:01.748 ********** 2025-04-14 00:38:59.453509 | orchestrator | changed: [testbed-manager] 2025-04-14 00:38:59.454075 | orchestrator | 2025-04-14 00:38:59.455144 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-04-14 00:38:59.457534 | orchestrator | Monday 14 April 2025 00:38:59 +0000 (0:00:06.906) 0:00:08.654 ********** 2025-04-14 00:39:00.007149 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:00.007429 | orchestrator | 2025-04-14 00:39:00.007600 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-04-14 00:39:00.008222 | orchestrator | Monday 14 April 2025 00:39:00 +0000 (0:00:00.556) 0:00:09.211 ********** 2025-04-14 00:39:00.470429 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:00.470665 | orchestrator | 2025-04-14 00:39:00.472219 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-04-14 00:39:00.472346 | orchestrator | Monday 14 April 2025 00:39:00 +0000 (0:00:00.463) 0:00:09.675 ********** 2025-04-14 00:39:01.046095 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:01.046918 | orchestrator | 2025-04-14 00:39:01.048999 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-04-14 00:39:01.049542 | orchestrator | Monday 14 April 2025 00:39:01 +0000 (0:00:00.574) 0:00:10.249 ********** 2025-04-14 00:39:01.656887 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:01.657549 | orchestrator | 2025-04-14 00:39:01.661015 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-04-14 00:39:02.089678 | orchestrator | Monday 14 April 2025 00:39:01 +0000 (0:00:00.611) 0:00:10.860 ********** 2025-04-14 00:39:02.089806 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:02.090694 | orchestrator | 2025-04-14 00:39:02.092401 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-04-14 00:39:02.092654 | orchestrator | Monday 14 April 2025 00:39:02 +0000 (0:00:00.433) 0:00:11.293 ********** 2025-04-14 00:39:03.324587 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:03.325199 | orchestrator | 2025-04-14 00:39:03.326570 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-04-14 00:39:04.338681 | orchestrator | Monday 14 April 2025 00:39:03 +0000 (0:00:01.233) 0:00:12.527 ********** 2025-04-14 00:39:04.338814 | orchestrator | changed: [testbed-manager] => (item=None) 2025-04-14 00:39:04.340458 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:04.341171 | orchestrator | 2025-04-14 00:39:04.342344 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-04-14 00:39:04.343061 | orchestrator | Monday 14 April 2025 00:39:04 +0000 (0:00:01.014) 0:00:13.541 ********** 2025-04-14 00:39:06.212626 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:06.214098 | orchestrator | 2025-04-14 00:39:06.215461 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-04-14 00:39:06.216606 | orchestrator | Monday 14 April 2025 00:39:06 +0000 (0:00:01.874) 0:00:15.415 ********** 2025-04-14 00:39:07.185225 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:07.185739 | orchestrator | 2025-04-14 00:39:07.185816 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:39:07.185881 | orchestrator | 2025-04-14 00:39:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:39:07.186107 | orchestrator | 2025-04-14 00:39:07 | INFO  | Please wait and do not abort execution. 2025-04-14 00:39:07.186154 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:39:07.186265 | orchestrator | 2025-04-14 00:39:07.186707 | orchestrator | Monday 14 April 2025 00:39:07 +0000 (0:00:00.973) 0:00:16.389 ********** 2025-04-14 00:39:07.186857 | orchestrator | =============================================================================== 2025-04-14 00:39:07.186900 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.91s 2025-04-14 00:39:07.187415 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.87s 2025-04-14 00:39:07.187655 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s 2025-04-14 00:39:07.187911 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.23s 2025-04-14 00:39:07.188387 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s 2025-04-14 00:39:07.188641 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.97s 2025-04-14 00:39:07.189963 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.61s 2025-04-14 00:39:07.190143 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.57s 2025-04-14 00:39:07.788515 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-04-14 00:39:07.788632 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2025-04-14 00:39:07.788667 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-04-14 00:39:07.788711 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-04-14 00:39:07.825949 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-04-14 00:39:07.911184 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-04-14 00:39:07.911394 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 175 0 --:--:-- --:--:-- --:--:-- 174 100 15 100 15 0 0 175 0 --:--:-- --:--:-- --:--:-- 174 2025-04-14 00:39:07.927618 | orchestrator | + osism apply --environment custom workarounds 2025-04-14 00:39:09.419824 | orchestrator | 2025-04-14 00:39:09 | INFO  | Trying to run play workarounds in environment custom 2025-04-14 00:39:09.469662 | orchestrator | 2025-04-14 00:39:09 | INFO  | Task 1c6ce0fc-bf55-4431-a8d8-61d46e42b6f2 (workarounds) was prepared for execution. 2025-04-14 00:39:12.687916 | orchestrator | 2025-04-14 00:39:09 | INFO  | It takes a moment until task 1c6ce0fc-bf55-4431-a8d8-61d46e42b6f2 (workarounds) has been started and output is visible here. 2025-04-14 00:39:12.688045 | orchestrator | 2025-04-14 00:39:12.688237 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:39:12.689231 | orchestrator | 2025-04-14 00:39:12.691151 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-04-14 00:39:12.858160 | orchestrator | Monday 14 April 2025 00:39:12 +0000 (0:00:00.148) 0:00:00.148 ********** 2025-04-14 00:39:12.858344 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-04-14 00:39:12.945112 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-04-14 00:39:13.030380 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-04-14 00:39:13.118114 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-04-14 00:39:13.201711 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-04-14 00:39:13.474787 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-04-14 00:39:13.475143 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-04-14 00:39:13.475185 | orchestrator | 2025-04-14 00:39:13.475872 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-04-14 00:39:13.476247 | orchestrator | 2025-04-14 00:39:13.476726 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-14 00:39:13.477438 | orchestrator | Monday 14 April 2025 00:39:13 +0000 (0:00:00.786) 0:00:00.934 ********** 2025-04-14 00:39:16.442710 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:16.443730 | orchestrator | 2025-04-14 00:39:16.447664 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-04-14 00:39:16.448431 | orchestrator | 2025-04-14 00:39:16.451806 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-04-14 00:39:16.453876 | orchestrator | Monday 14 April 2025 00:39:16 +0000 (0:00:02.965) 0:00:03.899 ********** 2025-04-14 00:39:18.244583 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:39:18.244772 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:39:18.245995 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:39:18.249041 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:39:18.249903 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:39:18.249930 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:39:18.249945 | orchestrator | 2025-04-14 00:39:18.249968 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-04-14 
00:39:18.250287 | orchestrator | 2025-04-14 00:39:18.250347 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-04-14 00:39:18.251641 | orchestrator | Monday 14 April 2025 00:39:18 +0000 (0:00:01.804) 0:00:05.703 ********** 2025-04-14 00:39:19.703890 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.707158 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.707680 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.707711 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.708425 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.708974 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-04-14 00:39:19.709485 | orchestrator | 2025-04-14 00:39:19.710273 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-04-14 00:39:19.712792 | orchestrator | Monday 14 April 2025 00:39:19 +0000 (0:00:01.458) 0:00:07.162 ********** 2025-04-14 00:39:23.522893 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:39:23.524131 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:39:23.524214 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:39:23.526585 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:39:23.527897 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:39:23.529000 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:39:23.530147 | orchestrator | 2025-04-14 00:39:23.531135 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-04-14 00:39:23.531977 | orchestrator | Monday 14 April 2025 00:39:23 +0000 (0:00:03.822) 0:00:10.984 ********** 2025-04-14 00:39:23.668836 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:39:23.746834 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:39:23.825820 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:39:24.067544 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:39:24.217578 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:39:24.218266 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:39:24.219379 | orchestrator | 2025-04-14 00:39:24.220182 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-04-14 00:39:24.223732 | orchestrator | 2025-04-14 00:39:24.224415 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-04-14 00:39:24.225119 | orchestrator | Monday 14 April 2025 00:39:24 +0000 (0:00:00.692) 0:00:11.677 ********** 2025-04-14 00:39:25.896530 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:25.899185 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:39:25.901129 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:39:25.901913 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:39:25.902760 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:39:25.903852 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:39:25.904239 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:39:25.905034 | 
orchestrator | 2025-04-14 00:39:25.905654 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-04-14 00:39:25.906426 | orchestrator | Monday 14 April 2025 00:39:25 +0000 (0:00:01.679) 0:00:13.357 ********** 2025-04-14 00:39:27.513392 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:27.513625 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:39:27.516021 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:39:27.516651 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:39:27.518888 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:39:27.520102 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:39:27.521333 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:39:27.522341 | orchestrator | 2025-04-14 00:39:27.523910 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-04-14 00:39:27.524794 | orchestrator | Monday 14 April 2025 00:39:27 +0000 (0:00:01.613) 0:00:14.970 ********** 2025-04-14 00:39:29.100814 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:39:29.104400 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:39:29.105205 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:39:29.105238 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:39:29.105258 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:39:29.107633 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:39:29.108690 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:29.110659 | orchestrator | 2025-04-14 00:39:29.111141 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-04-14 00:39:29.111179 | orchestrator | Monday 14 April 2025 00:39:29 +0000 (0:00:01.591) 0:00:16.562 ********** 2025-04-14 00:39:30.918713 | orchestrator | changed: [testbed-manager] 2025-04-14 00:39:30.918932 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:39:30.919364 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:39:30.920275 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:39:30.920667 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:39:30.921481 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:39:30.921929 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:39:30.922644 | orchestrator | 2025-04-14 00:39:30.923372 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-04-14 00:39:30.924155 | orchestrator | Monday 14 April 2025 00:39:30 +0000 (0:00:01.818) 0:00:18.380 ********** 2025-04-14 00:39:31.091810 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:39:31.171460 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:39:31.254334 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:39:31.326728 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:39:31.598474 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:39:31.743556 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:39:31.744452 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:39:31.745419 | orchestrator | 2025-04-14 00:39:31.748922 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-04-14 00:39:34.062361 | orchestrator | 2025-04-14 00:39:34.062519 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-04-14 00:39:34.062542 | orchestrator | Monday 14 April 2025 00:39:31 +0000 (0:00:00.823) 0:00:19.204 ********** 2025-04-14 00:39:34.062575 | orchestrator | ok: 
[testbed-node-0] 2025-04-14 00:39:34.062648 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:39:34.062666 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:39:34.062685 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:39:34.063226 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:39:34.063806 | orchestrator | ok: [testbed-manager] 2025-04-14 00:39:34.064687 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:39:34.065870 | orchestrator | 2025-04-14 00:39:34.066615 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:39:34.067147 | orchestrator | 2025-04-14 00:39:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:39:34.067769 | orchestrator | 2025-04-14 00:39:34 | INFO  | Please wait and do not abort execution. 2025-04-14 00:39:34.068524 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:39:34.071656 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.071991 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.072490 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.072880 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.073531 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.073702 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:34.074126 | orchestrator | 2025-04-14 00:39:34.074585 | orchestrator | Monday 14 April 2025 00:39:34 +0000 (0:00:02.319) 0:00:21.523 ********** 2025-04-14 00:39:34.074888 | orchestrator | =============================================================================== 2025-04-14 00:39:34.075818 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.82s 2025-04-14 00:39:34.075915 | orchestrator | Apply netplan configuration --------------------------------------------- 2.97s 2025-04-14 00:39:34.075962 | orchestrator | Install python3-docker -------------------------------------------------- 2.32s 2025-04-14 00:39:34.076037 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.82s 2025-04-14 00:39:34.076540 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-04-14 00:39:34.076960 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.68s 2025-04-14 00:39:34.077619 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-04-14 00:39:34.077775 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s 2025-04-14 00:39:34.078133 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2025-04-14 00:39:34.078506 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.82s 2025-04-14 00:39:34.078941 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s 2025-04-14 00:39:34.079433 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 
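The workarounds play above distributes the testbed CA certificate (/opt/configuration/environments/kolla/certificates/ca/testbed.crt) to the non-manager nodes and then runs update-ca-certificates, so the self-signed testbed CA becomes trusted system-wide on the Debian-family hosts. A minimal manual sketch of the same two tasks follows; the source path is taken from the log, while the destination filename under /usr/local/share/ca-certificates is an assumption:

    # Sketch only: trust the testbed CA on a Debian/Ubuntu node (destination path assumed)
    sudo cp /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
        /usr/local/share/ca-certificates/testbed.crt
    sudo update-ca-certificates   # regenerates /etc/ssl/certs and the bundled CA store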
2025-04-14 00:39:34.697848 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-04-14 00:39:36.175443 | orchestrator | 2025-04-14 00:39:36 | INFO  | Task 154770cf-ac05-4ece-b4c8-89bc65aaa486 (reboot) was prepared for execution. 2025-04-14 00:39:39.423651 | orchestrator | 2025-04-14 00:39:36 | INFO  | It takes a moment until task 154770cf-ac05-4ece-b4c8-89bc65aaa486 (reboot) has been started and output is visible here. 2025-04-14 00:39:39.424587 | orchestrator | 2025-04-14 00:39:39.425138 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:39.425471 | orchestrator | 2025-04-14 00:39:39.427556 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:39.428139 | orchestrator | Monday 14 April 2025 00:39:39 +0000 (0:00:00.153) 0:00:00.154 ********** 2025-04-14 00:39:39.537174 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:39:39.537375 | orchestrator | 2025-04-14 00:39:39.538106 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:39.538837 | orchestrator | Monday 14 April 2025 00:39:39 +0000 (0:00:00.116) 0:00:00.270 ********** 2025-04-14 00:39:40.462268 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:39:40.462823 | orchestrator | 2025-04-14 00:39:40.462869 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:40.463516 | orchestrator | Monday 14 April 2025 00:39:40 +0000 (0:00:00.924) 0:00:01.194 ********** 2025-04-14 00:39:40.605654 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:39:40.606327 | orchestrator | 2025-04-14 00:39:40.606901 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:40.607478 | orchestrator | 2025-04-14 00:39:40.607934 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:40.609563 | orchestrator | Monday 14 April 2025 00:39:40 +0000 (0:00:00.145) 0:00:01.340 ********** 2025-04-14 00:39:40.743244 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:39:40.743667 | orchestrator | 2025-04-14 00:39:40.744257 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:40.744944 | orchestrator | Monday 14 April 2025 00:39:40 +0000 (0:00:00.136) 0:00:01.476 ********** 2025-04-14 00:39:41.361512 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:39:41.361648 | orchestrator | 2025-04-14 00:39:41.361667 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:41.498886 | orchestrator | Monday 14 April 2025 00:39:41 +0000 (0:00:00.618) 0:00:02.094 ********** 2025-04-14 00:39:41.499013 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:39:41.499552 | orchestrator | 2025-04-14 00:39:41.499590 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:41.501457 | orchestrator | 2025-04-14 00:39:41.501855 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:41.502136 | orchestrator | Monday 14 April 2025 00:39:41 +0000 (0:00:00.134) 0:00:02.229 ********** 2025-04-14 00:39:41.607947 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:39:41.609233 | orchestrator | 2025-04-14 00:39:41.609945 | orchestrator | TASK [Reboot 
system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:41.611054 | orchestrator | Monday 14 April 2025 00:39:41 +0000 (0:00:00.111) 0:00:02.340 ********** 2025-04-14 00:39:42.393844 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:39:42.394381 | orchestrator | 2025-04-14 00:39:42.394410 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:42.397779 | orchestrator | Monday 14 April 2025 00:39:42 +0000 (0:00:00.785) 0:00:03.126 ********** 2025-04-14 00:39:42.508077 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:39:42.509844 | orchestrator | 2025-04-14 00:39:42.510260 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:42.512709 | orchestrator | 2025-04-14 00:39:42.513098 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:42.513158 | orchestrator | Monday 14 April 2025 00:39:42 +0000 (0:00:00.112) 0:00:03.239 ********** 2025-04-14 00:39:42.610967 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:39:42.611549 | orchestrator | 2025-04-14 00:39:42.612201 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:42.612758 | orchestrator | Monday 14 April 2025 00:39:42 +0000 (0:00:00.103) 0:00:03.342 ********** 2025-04-14 00:39:43.250360 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:39:43.250800 | orchestrator | 2025-04-14 00:39:43.251437 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:43.252203 | orchestrator | Monday 14 April 2025 00:39:43 +0000 (0:00:00.640) 0:00:03.982 ********** 2025-04-14 00:39:43.372905 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:39:43.374199 | orchestrator | 2025-04-14 00:39:43.375268 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:43.376446 | orchestrator | 2025-04-14 00:39:43.377417 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:43.378106 | orchestrator | Monday 14 April 2025 00:39:43 +0000 (0:00:00.121) 0:00:04.104 ********** 2025-04-14 00:39:43.483826 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:39:43.487392 | orchestrator | 2025-04-14 00:39:43.487934 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:43.488533 | orchestrator | Monday 14 April 2025 00:39:43 +0000 (0:00:00.112) 0:00:04.216 ********** 2025-04-14 00:39:44.180043 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:39:44.180276 | orchestrator | 2025-04-14 00:39:44.182217 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:44.182706 | orchestrator | Monday 14 April 2025 00:39:44 +0000 (0:00:00.695) 0:00:04.912 ********** 2025-04-14 00:39:44.302549 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:39:44.303499 | orchestrator | 2025-04-14 00:39:44.304495 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-04-14 00:39:44.305273 | orchestrator | 2025-04-14 00:39:44.306947 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-04-14 00:39:44.407840 | orchestrator | Monday 14 April 2025 00:39:44 +0000 (0:00:00.119) 0:00:05.032 ********** 
2025-04-14 00:39:44.407952 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:39:44.408257 | orchestrator | 2025-04-14 00:39:44.409523 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-04-14 00:39:44.411570 | orchestrator | Monday 14 April 2025 00:39:44 +0000 (0:00:00.108) 0:00:05.141 ********** 2025-04-14 00:39:45.107225 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:39:45.107476 | orchestrator | 2025-04-14 00:39:45.107509 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-04-14 00:39:45.109507 | orchestrator | Monday 14 April 2025 00:39:45 +0000 (0:00:00.698) 0:00:05.839 ********** 2025-04-14 00:39:45.146875 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:39:45.148451 | orchestrator | 2025-04-14 00:39:45.150762 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:39:45.150796 | orchestrator | 2025-04-14 00:39:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:39:45.151428 | orchestrator | 2025-04-14 00:39:45 | INFO  | Please wait and do not abort execution. 2025-04-14 00:39:45.151448 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.152364 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.153595 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.154633 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.155513 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.156572 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:39:45.157374 | orchestrator | 2025-04-14 00:39:45.158122 | orchestrator | Monday 14 April 2025 00:39:45 +0000 (0:00:00.042) 0:00:05.881 ********** 2025-04-14 00:39:45.158713 | orchestrator | =============================================================================== 2025-04-14 00:39:45.160072 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.36s 2025-04-14 00:39:45.160560 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s 2025-04-14 00:39:45.161372 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s 2025-04-14 00:39:45.734675 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-04-14 00:39:47.197035 | orchestrator | 2025-04-14 00:39:47 | INFO  | Task d6ff0e1c-9785-407c-a598-694cbbf226bf (wait-for-connection) was prepared for execution. 2025-04-14 00:39:50.427047 | orchestrator | 2025-04-14 00:39:47 | INFO  | It takes a moment until task d6ff0e1c-9785-407c-a598-694cbbf226bf (wait-for-connection) has been started and output is visible here. 
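The reboot play above is guarded by the extra variable ireallymeanit=yes: without it, the "Exit playbook, if user did not mean to reboot systems" task aborts the run. With the flag set, each node is rebooted without waiting for it to come back, and reachability is checked by the separate wait-for-connection play that follows. A minimal shell sketch of that confirm-then-reboot pattern, with only the variable name taken from the log and everything else an assumption:

    # Sketch: refuse to reboot unless explicitly confirmed (variable name from the log)
    if [ "${ireallymeanit:-no}" != "yes" ]; then
        echo "Refusing to reboot; set ireallymeanit=yes to confirm." >&2
        exit 1
    fi
    sudo systemctl reboot   # fire-and-forget; a later play waits for the node to return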
2025-04-14 00:39:50.427182 | orchestrator | 2025-04-14 00:39:50.429919 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-04-14 00:39:50.431051 | orchestrator | 2025-04-14 00:39:50.431119 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-04-14 00:39:50.431503 | orchestrator | Monday 14 April 2025 00:39:50 +0000 (0:00:00.176) 0:00:00.176 ********** 2025-04-14 00:40:03.521641 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:40:03.523747 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:40:03.523795 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:40:03.523820 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:40:03.524876 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:40:03.526461 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:40:03.529122 | orchestrator | 2025-04-14 00:40:03.531405 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:40:03.531487 | orchestrator | 2025-04-14 00:40:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:40:03.531928 | orchestrator | 2025-04-14 00:40:03 | INFO  | Please wait and do not abort execution. 2025-04-14 00:40:03.531962 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.532345 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.532948 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.533545 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.534142 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.534728 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:03.535222 | orchestrator | 2025-04-14 00:40:03.535542 | orchestrator | Monday 14 April 2025 00:40:03 +0000 (0:00:13.096) 0:00:13.272 ********** 2025-04-14 00:40:03.536162 | orchestrator | =============================================================================== 2025-04-14 00:40:03.536469 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.10s 2025-04-14 00:40:04.090590 | orchestrator | + osism apply hddtemp 2025-04-14 00:40:05.644427 | orchestrator | 2025-04-14 00:40:05 | INFO  | Task 016a6ac6-d9ea-454b-9aa0-96f948c97763 (hddtemp) was prepared for execution. 2025-04-14 00:40:09.039686 | orchestrator | 2025-04-14 00:40:05 | INFO  | It takes a moment until task 016a6ac6-d9ea-454b-9aa0-96f948c97763 (hddtemp) has been started and output is visible here. 
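After the fire-and-forget reboot, the wait-for-connection play simply blocks until every node answers over SSH again before the deployment continues. Outside of Ansible, the same effect can be approximated with a small polling loop; the host name, attempt count, and interval below are illustrative assumptions:

    # Sketch: poll a node until SSH is reachable again (values are examples)
    host=testbed-node-0
    for attempt in $(seq 1 60); do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true; then
            echo "$host is reachable again"
            break
        fi
        sleep 5
    done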
2025-04-14 00:40:09.039822 | orchestrator | 2025-04-14 00:40:09.039934 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-04-14 00:40:09.039962 | orchestrator | 2025-04-14 00:40:09.040385 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-04-14 00:40:09.040887 | orchestrator | Monday 14 April 2025 00:40:09 +0000 (0:00:00.202) 0:00:00.202 ********** 2025-04-14 00:40:09.194777 | orchestrator | ok: [testbed-manager] 2025-04-14 00:40:09.273915 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:40:09.363110 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:40:09.448813 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:40:09.526562 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:40:09.762438 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:40:09.763137 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:40:09.764081 | orchestrator | 2025-04-14 00:40:09.770000 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-04-14 00:40:10.986945 | orchestrator | Monday 14 April 2025 00:40:09 +0000 (0:00:00.722) 0:00:00.925 ********** 2025-04-14 00:40:10.987119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:40:10.990432 | orchestrator | 2025-04-14 00:40:10.990538 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-04-14 00:40:13.004486 | orchestrator | Monday 14 April 2025 00:40:10 +0000 (0:00:01.221) 0:00:02.147 ********** 2025-04-14 00:40:13.004633 | orchestrator | ok: [testbed-manager] 2025-04-14 00:40:13.004975 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:40:13.005051 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:40:13.005120 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:40:13.006716 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:40:13.007411 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:40:13.008308 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:40:13.008535 | orchestrator | 2025-04-14 00:40:13.009118 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-04-14 00:40:13.009676 | orchestrator | Monday 14 April 2025 00:40:12 +0000 (0:00:02.021) 0:00:04.168 ********** 2025-04-14 00:40:13.652726 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:40:13.752557 | orchestrator | changed: [testbed-manager] 2025-04-14 00:40:14.183769 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:40:14.184332 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:40:14.184985 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:40:14.186903 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:40:14.187955 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:40:14.190307 | orchestrator | 2025-04-14 00:40:14.190482 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-04-14 00:40:14.191432 | orchestrator | Monday 14 April 2025 00:40:14 +0000 (0:00:01.175) 0:00:05.344 ********** 2025-04-14 00:40:15.505683 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:40:15.506305 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:40:15.506354 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:40:15.507167 | orchestrator | ok: [testbed-node-3] 2025-04-14 
00:40:15.507633 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:40:15.508692 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:40:15.509181 | orchestrator | ok: [testbed-manager] 2025-04-14 00:40:15.509776 | orchestrator | 2025-04-14 00:40:15.510128 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-04-14 00:40:15.510581 | orchestrator | Monday 14 April 2025 00:40:15 +0000 (0:00:01.322) 0:00:06.667 ********** 2025-04-14 00:40:15.795469 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:40:15.881401 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:40:15.969598 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:40:16.049721 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:40:16.184476 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:40:16.184946 | orchestrator | changed: [testbed-manager] 2025-04-14 00:40:16.190144 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:40:16.190310 | orchestrator | 2025-04-14 00:40:16.191921 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-04-14 00:40:16.193434 | orchestrator | Monday 14 April 2025 00:40:16 +0000 (0:00:00.681) 0:00:07.348 ********** 2025-04-14 00:40:28.244170 | orchestrator | changed: [testbed-manager] 2025-04-14 00:40:28.247528 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:40:28.248716 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:40:28.248806 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:40:28.248824 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:40:28.248840 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:40:28.248866 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:40:28.249594 | orchestrator | 2025-04-14 00:40:28.250496 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-04-14 00:40:28.251382 | orchestrator | Monday 14 April 2025 00:40:28 +0000 (0:00:12.052) 0:00:19.401 ********** 2025-04-14 00:40:29.483163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:40:29.483524 | orchestrator | 2025-04-14 00:40:29.483918 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-04-14 00:40:29.484628 | orchestrator | Monday 14 April 2025 00:40:29 +0000 (0:00:01.243) 0:00:20.644 ********** 2025-04-14 00:40:31.388209 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:40:31.389222 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:40:31.391309 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:40:31.393693 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:40:31.395588 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:40:31.396161 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:40:31.397398 | orchestrator | changed: [testbed-manager] 2025-04-14 00:40:31.398188 | orchestrator | 2025-04-14 00:40:31.400709 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:40:31.400873 | orchestrator | 2025-04-14 00:40:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:40:31.402154 | orchestrator | 2025-04-14 00:40:31 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:40:31.402288 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:40:31.402902 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.403492 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.404092 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.405322 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.405854 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.407106 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:31.408018 | orchestrator | 2025-04-14 00:40:31.408468 | orchestrator | Monday 14 April 2025 00:40:31 +0000 (0:00:01.906) 0:00:22.551 ********** 2025-04-14 00:40:31.409287 | orchestrator | =============================================================================== 2025-04-14 00:40:31.410084 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.05s 2025-04-14 00:40:31.410579 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2025-04-14 00:40:31.410921 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2025-04-14 00:40:31.411539 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.32s 2025-04-14 00:40:31.415561 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s 2025-04-14 00:40:31.421134 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2025-04-14 00:40:31.421207 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-04-14 00:40:31.421633 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-04-14 00:40:31.422251 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.68s 2025-04-14 00:40:32.073372 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-04-14 00:40:33.446761 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-04-14 00:40:33.447338 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-04-14 00:40:33.447400 | orchestrator | + local max_attempts=60 2025-04-14 00:40:33.447418 | orchestrator | + local name=ceph-ansible 2025-04-14 00:40:33.447434 | orchestrator | + local attempt_num=1 2025-04-14 00:40:33.447456 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-04-14 00:40:33.491180 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:40:33.491423 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-04-14 00:40:33.491457 | orchestrator | + local max_attempts=60 2025-04-14 00:40:33.491475 | orchestrator | + local name=kolla-ansible 2025-04-14 00:40:33.491489 | orchestrator | + local attempt_num=1 2025-04-14 00:40:33.491510 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-04-14 00:40:33.525005 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:40:33.525477 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-04-14 00:40:33.525513 | orchestrator | + local max_attempts=60 2025-04-14 00:40:33.525530 | orchestrator | + local name=osism-ansible 2025-04-14 00:40:33.525545 | orchestrator | + local attempt_num=1 2025-04-14 00:40:33.525565 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-04-14 00:40:33.552994 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-04-14 00:40:33.709696 | orchestrator | + [[ true == \t\r\u\e ]] 2025-04-14 00:40:33.709812 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-04-14 00:40:33.709849 | orchestrator | ARA in ceph-ansible already disabled. 2025-04-14 00:40:33.866820 | orchestrator | ARA in kolla-ansible already disabled. 2025-04-14 00:40:34.046213 | orchestrator | ARA in osism-ansible already disabled. 2025-04-14 00:40:34.206820 | orchestrator | ARA in osism-kubernetes already disabled. 2025-04-14 00:40:34.208202 | orchestrator | + osism apply gather-facts 2025-04-14 00:40:35.751715 | orchestrator | 2025-04-14 00:40:35 | INFO  | Task d5bae03d-fc17-49ec-b32c-feb42cf8ca1d (gather-facts) was prepared for execution. 2025-04-14 00:40:39.119611 | orchestrator | 2025-04-14 00:40:35 | INFO  | It takes a moment until task d5bae03d-fc17-49ec-b32c-feb42cf8ca1d (gather-facts) has been started and output is visible here. 2025-04-14 00:40:39.119759 | orchestrator | 2025-04-14 00:40:39.120300 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-14 00:40:39.122668 | orchestrator | 2025-04-14 00:40:39.123797 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:40:39.125364 | orchestrator | Monday 14 April 2025 00:40:39 +0000 (0:00:00.183) 0:00:00.183 ********** 2025-04-14 00:40:44.090803 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:40:44.091794 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:40:44.091865 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:40:44.092148 | orchestrator | ok: [testbed-manager] 2025-04-14 00:40:44.092181 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:40:44.092319 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:40:44.092800 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:40:44.093086 | orchestrator | 2025-04-14 00:40:44.093568 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-14 00:40:44.093820 | orchestrator | 2025-04-14 00:40:44.094335 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-14 00:40:44.094724 | orchestrator | Monday 14 April 2025 00:40:44 +0000 (0:00:04.974) 0:00:05.158 ********** 2025-04-14 00:40:44.246199 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:40:44.321106 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:40:44.401578 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:40:44.482395 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:40:44.560199 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:40:44.595019 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:40:44.595511 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:40:44.596328 | orchestrator | 2025-04-14 00:40:44.597860 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:40:44.597932 | orchestrator | 2025-04-14 00:40:44 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-04-14 00:40:44.598426 | orchestrator | 2025-04-14 00:40:44 | INFO  | Please wait and do not abort execution. 2025-04-14 00:40:44.598468 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.598869 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.599740 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.599962 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.600763 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.601311 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.601690 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 00:40:44.602136 | orchestrator | 2025-04-14 00:40:44.602542 | orchestrator | Monday 14 April 2025 00:40:44 +0000 (0:00:00.504) 0:00:05.662 ********** 2025-04-14 00:40:44.602864 | orchestrator | =============================================================================== 2025-04-14 00:40:44.603363 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.97s 2025-04-14 00:40:44.603678 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-04-14 00:40:45.244542 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-04-14 00:40:45.257129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-04-14 00:40:45.269372 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-04-14 00:40:45.287066 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-04-14 00:40:45.299676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-04-14 00:40:45.318424 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-04-14 00:40:45.335781 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-04-14 00:40:45.352064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-04-14 00:40:45.371796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-04-14 00:40:45.388111 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-04-14 00:40:45.405490 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-04-14 00:40:45.420320 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-04-14 00:40:45.434456 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-04-14 00:40:45.450203 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-04-14 00:40:45.466473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-04-14 00:40:45.478469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-04-14 00:40:45.497802 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-04-14 00:40:45.512811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-04-14 00:40:45.529846 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-04-14 00:40:45.544877 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-04-14 00:40:45.559424 | orchestrator | + [[ false == \t\r\u\e ]] 2025-04-14 00:40:45.912223 | orchestrator | changed 2025-04-14 00:40:45.979598 | 2025-04-14 00:40:45.979730 | TASK [Deploy services] 2025-04-14 00:40:46.119159 | orchestrator | skipping: Conditional result was False 2025-04-14 00:40:46.140211 | 2025-04-14 00:40:46.140369 | TASK [Deploy in a nutshell] 2025-04-14 00:40:46.868360 | orchestrator | + set -e 2025-04-14 00:40:46.868607 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-04-14 00:40:46.868642 | orchestrator | ++ export INTERACTIVE=false 2025-04-14 00:40:46.868660 | orchestrator | ++ INTERACTIVE=false 2025-04-14 00:40:46.868703 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-04-14 00:40:46.868721 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-04-14 00:40:46.868737 | orchestrator | + source /opt/manager-vars.sh 2025-04-14 00:40:46.868761 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-04-14 00:40:46.868785 | orchestrator | ++ NUMBER_OF_NODES=6 2025-04-14 00:40:46.868801 | orchestrator | ++ export CEPH_VERSION=quincy 2025-04-14 00:40:46.868815 | orchestrator | ++ CEPH_VERSION=quincy 2025-04-14 00:40:46.868830 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-04-14 00:40:46.868844 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-04-14 00:40:46.868872 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-04-14 00:40:46.870350 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-04-14 00:40:46.870451 | orchestrator | ++ export OPENSTACK_VERSION=2024.1 2025-04-14 00:40:46.870468 | orchestrator | ++ OPENSTACK_VERSION=2024.1 2025-04-14 00:40:46.870480 | orchestrator | ++ export ARA=false 2025-04-14 00:40:46.870491 | orchestrator | ++ ARA=false 2025-04-14 00:40:46.870502 | orchestrator | ++ export TEMPEST=false 2025-04-14 00:40:46.870512 | orchestrator | ++ TEMPEST=false 2025-04-14 00:40:46.870522 | orchestrator | ++ export IS_ZUUL=true 2025-04-14 00:40:46.870532 | orchestrator | ++ IS_ZUUL=true 2025-04-14 00:40:46.870542 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:40:46.870554 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.183 2025-04-14 00:40:46.870565 | orchestrator | ++ export EXTERNAL_API=false 2025-04-14 00:40:46.870575 | orchestrator | ++ EXTERNAL_API=false 2025-04-14 00:40:46.870585 | orchestrator | 2025-04-14 00:40:46.870595 | orchestrator | # PULL IMAGES 2025-04-14 00:40:46.870606 | orchestrator | 2025-04-14 00:40:46.870616 
| orchestrator | ++ export IMAGE_USER=ubuntu 2025-04-14 00:40:46.870626 | orchestrator | ++ IMAGE_USER=ubuntu 2025-04-14 00:40:46.870648 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-04-14 00:40:46.870659 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-04-14 00:40:46.870669 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-04-14 00:40:46.870679 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-04-14 00:40:46.870689 | orchestrator | + echo 2025-04-14 00:40:46.870700 | orchestrator | + echo '# PULL IMAGES' 2025-04-14 00:40:46.870710 | orchestrator | + echo 2025-04-14 00:40:46.870735 | orchestrator | ++ semver 8.1.0 7.0.0 2025-04-14 00:40:46.931856 | orchestrator | + [[ 1 -ge 0 ]] 2025-04-14 00:40:48.409793 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-04-14 00:40:48.409963 | orchestrator | 2025-04-14 00:40:48 | INFO  | Trying to run play pull-images in environment custom 2025-04-14 00:40:48.459280 | orchestrator | 2025-04-14 00:40:48 | INFO  | Task 4d70d73d-ba80-4258-aaa3-cc72b63bcb73 (pull-images) was prepared for execution. 2025-04-14 00:40:51.666340 | orchestrator | 2025-04-14 00:40:48 | INFO  | It takes a moment until task 4d70d73d-ba80-4258-aaa3-cc72b63bcb73 (pull-images) has been started and output is visible here. 2025-04-14 00:40:51.667160 | orchestrator | 2025-04-14 00:40:51.668714 | orchestrator | PLAY [Pull images] ************************************************************* 2025-04-14 00:40:51.669492 | orchestrator | 2025-04-14 00:40:51.670157 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-04-14 00:40:51.671393 | orchestrator | Monday 14 April 2025 00:40:51 +0000 (0:00:00.149) 0:00:00.149 ********** 2025-04-14 00:41:26.280462 | orchestrator | changed: [testbed-manager] 2025-04-14 00:42:17.489881 | orchestrator | 2025-04-14 00:42:17.490177 | orchestrator | TASK [Pull other images] ******************************************************* 2025-04-14 00:42:17.490283 | orchestrator | Monday 14 April 2025 00:41:26 +0000 (0:00:34.611) 0:00:34.760 ********** 2025-04-14 00:42:17.490329 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-04-14 00:42:17.491065 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-04-14 00:42:17.491105 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-04-14 00:42:17.491129 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-04-14 00:42:17.491170 | orchestrator | changed: [testbed-manager] => (item=common) 2025-04-14 00:42:17.491208 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-04-14 00:42:17.491224 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-04-14 00:42:17.491241 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-04-14 00:42:17.491287 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-04-14 00:42:17.491310 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-04-14 00:42:17.491637 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-04-14 00:42:17.491844 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-04-14 00:42:17.491877 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-04-14 00:42:17.492104 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-04-14 00:42:17.492352 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-04-14 00:42:17.493142 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-04-14 00:42:17.493473 | orchestrator | 
changed: [testbed-manager] => (item=octavia) 2025-04-14 00:42:17.493501 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-04-14 00:42:17.493521 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-04-14 00:42:17.493716 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-04-14 00:42:17.493933 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-04-14 00:42:17.494215 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-04-14 00:42:17.494477 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-04-14 00:42:17.494807 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-04-14 00:42:17.495079 | orchestrator | 2025-04-14 00:42:17.495759 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:42:17.496031 | orchestrator | 2025-04-14 00:42:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:42:17.496128 | orchestrator | 2025-04-14 00:42:17 | INFO  | Please wait and do not abort execution. 2025-04-14 00:42:17.496598 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:42:17.497511 | orchestrator | 2025-04-14 00:42:17.501894 | orchestrator | Monday 14 April 2025 00:42:17 +0000 (0:00:51.209) 0:01:25.970 ********** 2025-04-14 00:42:19.770293 | orchestrator | =============================================================================== 2025-04-14 00:42:19.770409 | orchestrator | Pull other images ------------------------------------------------------ 51.21s 2025-04-14 00:42:19.770428 | orchestrator | Pull keystone image ---------------------------------------------------- 34.61s 2025-04-14 00:42:19.770459 | orchestrator | 2025-04-14 00:42:19 | INFO  | Trying to run play wipe-partitions in environment custom 2025-04-14 00:42:19.819886 | orchestrator | 2025-04-14 00:42:19 | INFO  | Task ef7b207d-19f2-4841-bc98-57c10ee6b51c (wipe-partitions) was prepared for execution. 2025-04-14 00:42:23.241816 | orchestrator | 2025-04-14 00:42:19 | INFO  | It takes a moment until task ef7b207d-19f2-4841-bc98-57c10ee6b51c (wipe-partitions) has been started and output is visible here. 
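The pull-images play above pre-pulls the Keystone image and then the remaining Kolla service images on the manager, so the later OpenStack deployment does not stall on registry downloads. The per-item loop corresponds to plain image pulls; in the sketch below only the service names come from the log, while the registry, namespace, and tag are placeholders (the tag loosely follows OPENSTACK_VERSION=2024.1 from the deploy script):

    # Sketch: pre-pull a few service images (registry/namespace/tag are hypothetical)
    for image in keystone aodh barbican cinder glance neutron nova; do
        docker pull "registry.example.org/kolla/${image}:2024.1"
    done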
2025-04-14 00:42:23.242009 | orchestrator | 2025-04-14 00:42:23.242143 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-04-14 00:42:23.242243 | orchestrator | 2025-04-14 00:42:23.242265 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-04-14 00:42:23.242326 | orchestrator | Monday 14 April 2025 00:42:23 +0000 (0:00:00.137) 0:00:00.137 ********** 2025-04-14 00:42:23.837462 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:42:23.838285 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:42:23.838332 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:42:23.839211 | orchestrator | 2025-04-14 00:42:23.839548 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-04-14 00:42:23.839953 | orchestrator | Monday 14 April 2025 00:42:23 +0000 (0:00:00.597) 0:00:00.735 ********** 2025-04-14 00:42:24.013272 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:24.103369 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:42:24.103554 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:42:24.103576 | orchestrator | 2025-04-14 00:42:24.103627 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-04-14 00:42:24.106312 | orchestrator | Monday 14 April 2025 00:42:24 +0000 (0:00:00.267) 0:00:01.003 ********** 2025-04-14 00:42:24.870813 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:42:24.871442 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:42:24.871490 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:42:24.872861 | orchestrator | 2025-04-14 00:42:24.873074 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-04-14 00:42:24.873238 | orchestrator | Monday 14 April 2025 00:42:24 +0000 (0:00:00.762) 0:00:01.765 ********** 2025-04-14 00:42:25.034804 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:25.140704 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:42:25.143084 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:42:25.143145 | orchestrator | 2025-04-14 00:42:25.143163 | orchestrator | TASK [Check device availability] *********************************************** 2025-04-14 00:42:25.143219 | orchestrator | Monday 14 April 2025 00:42:25 +0000 (0:00:00.274) 0:00:02.039 ********** 2025-04-14 00:42:26.299022 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-14 00:42:26.299230 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-14 00:42:26.299257 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-14 00:42:26.299316 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-14 00:42:26.299379 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-14 00:42:26.299553 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-14 00:42:26.300468 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-14 00:42:26.301805 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-14 00:42:26.301864 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-14 00:42:26.301886 | orchestrator | 2025-04-14 00:42:26.302895 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-04-14 00:42:26.302937 | orchestrator | Monday 14 April 2025 00:42:26 +0000 (0:00:01.156) 0:00:03.196 ********** 2025-04-14 00:42:27.641647 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-04-14 00:42:27.642585 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-04-14 00:42:27.642631 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-04-14 00:42:27.642647 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-04-14 00:42:27.642661 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-04-14 00:42:27.642675 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-04-14 00:42:27.642699 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-04-14 00:42:27.642769 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-04-14 00:42:27.642787 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-04-14 00:42:27.642805 | orchestrator | 2025-04-14 00:42:27.643217 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-04-14 00:42:27.643469 | orchestrator | Monday 14 April 2025 00:42:27 +0000 (0:00:01.342) 0:00:04.539 ********** 2025-04-14 00:42:29.975887 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-04-14 00:42:29.979732 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-04-14 00:42:29.984259 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-04-14 00:42:29.988129 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-04-14 00:42:29.992541 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-04-14 00:42:29.993474 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-04-14 00:42:29.997169 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-04-14 00:42:29.997837 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-04-14 00:42:29.998456 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-04-14 00:42:30.002596 | orchestrator | 2025-04-14 00:42:30.003417 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-04-14 00:42:30.005135 | orchestrator | Monday 14 April 2025 00:42:29 +0000 (0:00:02.330) 0:00:06.869 ********** 2025-04-14 00:42:30.641713 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:42:30.642262 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:42:30.644857 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:42:30.645123 | orchestrator | 2025-04-14 00:42:30.646114 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-04-14 00:42:30.649264 | orchestrator | Monday 14 April 2025 00:42:30 +0000 (0:00:00.669) 0:00:07.539 ********** 2025-04-14 00:42:31.291253 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:42:31.291938 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:42:31.293844 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:42:31.294447 | orchestrator | 2025-04-14 00:42:31.295391 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:42:31.296448 | orchestrator | 2025-04-14 00:42:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:42:31.297086 | orchestrator | 2025-04-14 00:42:31 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:42:31.297111 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:31.297788 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:31.298648 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:31.298840 | orchestrator | 2025-04-14 00:42:31.299729 | orchestrator | Monday 14 April 2025 00:42:31 +0000 (0:00:00.650) 0:00:08.189 ********** 2025-04-14 00:42:31.300307 | orchestrator | =============================================================================== 2025-04-14 00:42:31.300937 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s 2025-04-14 00:42:31.302011 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.34s 2025-04-14 00:42:31.302130 | orchestrator | Check device availability ----------------------------------------------- 1.16s 2025-04-14 00:42:31.302939 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.76s 2025-04-14 00:42:31.303294 | orchestrator | Reload udev rules ------------------------------------------------------- 0.67s 2025-04-14 00:42:31.303865 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2025-04-14 00:42:31.304492 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2025-04-14 00:42:31.305009 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.27s 2025-04-14 00:42:31.305331 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-04-14 00:42:33.482508 | orchestrator | 2025-04-14 00:42:33 | INFO  | Task 95473961-7796-40d3-b55b-e694fcd52c6c (facts) was prepared for execution. 2025-04-14 00:42:37.085551 | orchestrator | 2025-04-14 00:42:33 | INFO  | It takes a moment until task 95473961-7796-40d3-b55b-e694fcd52c6c (facts) has been started and output is visible here. 
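The wipe-partitions play above prepares the OSD disks /dev/sdb, /dev/sdc and /dev/sdd on testbed-node-3/4/5: it looks for leftover logical volumes owned by UID 167 (the fixed UID used by the Ceph packages), would remove any rook- or ceph-prefixed LVs (skipped on this run), then wipes filesystem signatures, zeroes the start of each disk and re-triggers udev. A rough per-device shell equivalent, assuming standard wipefs/dd/udevadm invocations (the actual module arguments are not visible in this log):

  for dev in /dev/sdb /dev/sdc /dev/sdd; do
      test -b "$dev"                              # "Check device availability"
      wipefs --all "$dev"                         # "Wipe partitions with wipefs"
      dd if=/dev/zero of="$dev" bs=1M count=32    # "Overwrite first 32M with zeros"
  done
  udevadm control --reload-rules                  # "Reload udev rules"
  udevadm trigger                                 # "Request device events from the kernel"

That wipefs reports ok while the zeroing step reports changed is consistent with disks that carry no filesystem signatures yet.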
2025-04-14 00:42:37.085703 | orchestrator | 2025-04-14 00:42:37.089164 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-14 00:42:37.092902 | orchestrator | 2025-04-14 00:42:37.092932 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-14 00:42:37.092953 | orchestrator | Monday 14 April 2025 00:42:37 +0000 (0:00:00.223) 0:00:00.223 ********** 2025-04-14 00:42:38.306343 | orchestrator | ok: [testbed-manager] 2025-04-14 00:42:38.307089 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:42:38.307125 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:42:38.307148 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:42:38.307382 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:42:38.307671 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:42:38.308092 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:42:38.308636 | orchestrator | 2025-04-14 00:42:38.309037 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-14 00:42:38.310352 | orchestrator | Monday 14 April 2025 00:42:38 +0000 (0:00:01.217) 0:00:01.441 ********** 2025-04-14 00:42:38.538229 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:42:38.627829 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:42:38.734867 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:42:38.871261 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:42:38.994589 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:40.011219 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:42:40.015522 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:42:40.017267 | orchestrator | 2025-04-14 00:42:40.017319 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-14 00:42:40.017333 | orchestrator | 2025-04-14 00:42:40.018069 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:42:40.018377 | orchestrator | Monday 14 April 2025 00:42:40 +0000 (0:00:01.707) 0:00:03.149 ********** 2025-04-14 00:42:44.676539 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:42:44.678759 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:42:44.681926 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:42:44.685421 | orchestrator | ok: [testbed-manager] 2025-04-14 00:42:44.687029 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:42:44.691335 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:42:44.693042 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:42:44.694572 | orchestrator | 2025-04-14 00:42:44.696511 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-14 00:42:44.698061 | orchestrator | 2025-04-14 00:42:44.698477 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-14 00:42:44.698959 | orchestrator | Monday 14 April 2025 00:42:44 +0000 (0:00:04.666) 0:00:07.815 ********** 2025-04-14 00:42:45.029006 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:42:45.106465 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:42:45.182843 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:42:45.266763 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:42:45.357093 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:45.389601 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:42:45.389749 | orchestrator | skipping: 
[testbed-node-5] 2025-04-14 00:42:45.390403 | orchestrator | 2025-04-14 00:42:45.390813 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:42:45.391431 | orchestrator | 2025-04-14 00:42:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:42:45.392372 | orchestrator | 2025-04-14 00:42:45 | INFO  | Please wait and do not abort execution. 2025-04-14 00:42:45.392390 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.394308 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.395232 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.396160 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.397102 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.397674 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.398001 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:42:45.398531 | orchestrator | 2025-04-14 00:42:45.398861 | orchestrator | Monday 14 April 2025 00:42:45 +0000 (0:00:00.716) 0:00:08.532 ********** 2025-04-14 00:42:45.399251 | orchestrator | =============================================================================== 2025-04-14 00:42:45.399900 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s 2025-04-14 00:42:45.400052 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.71s 2025-04-14 00:42:45.400541 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s 2025-04-14 00:42:45.400980 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-04-14 00:42:47.619780 | orchestrator | 2025-04-14 00:42:47 | INFO  | Task 87949cb1-ac0d-473b-97a2-6316a64eb4d5 (ceph-configure-lvm-volumes) was prepared for execution. 2025-04-14 00:42:51.033878 | orchestrator | 2025-04-14 00:42:47 | INFO  | It takes a moment until task 87949cb1-ac0d-473b-97a2-6316a64eb4d5 (ceph-configure-lvm-volumes) has been started and output is visible here. 
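The ceph-configure-lvm-volumes task that starts below inventories each storage node's block devices (including their /dev/disk/by-id links and partitions), assigns a stable UUID per OSD disk, and derives from it the ceph-ansible lvm_volumes list (CEPH_STACK=ceph-ansible was exported earlier) that the "Write configuration file" handler persists on testbed-manager. Using the values printed further down for testbed-node-3, the generated data has roughly this shape; the file name in the sketch is a placeholder, since the real target path in the configuration repository is not visible in this log:

  # Illustrative dump of the node-3 result; the UUIDs are the ones printed by the
  # play below, the output file name is hypothetical.
  cat > testbed-node-3-ceph-lvm.yml <<'EOF'
  ceph_osd_devices:
    sdb:
      osd_lvm_uuid: 010b5855-d3d9-5348-85e9-2943091c3a59
    sdc:
      osd_lvm_uuid: 47a37963-cc76-524e-bf57-deb935e0a7e9
  lvm_volumes:
    - data: osd-block-010b5855-d3d9-5348-85e9-2943091c3a59
      data_vg: ceph-010b5855-d3d9-5348-85e9-2943091c3a59
    - data: osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9
      data_vg: ceph-47a37963-cc76-524e-bf57-deb935e0a7e9
  EOF

Only the block-only variant of lvm_volumes is generated; the block + db, block + wal and block + db + wal variants are skipped, presumably because no separate DB or WAL devices are configured in this testbed.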
2025-04-14 00:42:51.034095 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 00:42:51.684568 | orchestrator | 2025-04-14 00:42:51.688367 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-14 00:42:51.688429 | orchestrator | 2025-04-14 00:42:51.688468 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:42:51.689264 | orchestrator | Monday 14 April 2025 00:42:51 +0000 (0:00:00.564) 0:00:00.564 ********** 2025-04-14 00:42:51.958901 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-14 00:42:51.959501 | orchestrator | 2025-04-14 00:42:51.962476 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:42:51.962940 | orchestrator | Monday 14 April 2025 00:42:51 +0000 (0:00:00.277) 0:00:00.842 ********** 2025-04-14 00:42:52.169607 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:42:52.169770 | orchestrator | 2025-04-14 00:42:52.170578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:52.170963 | orchestrator | Monday 14 April 2025 00:42:52 +0000 (0:00:00.211) 0:00:01.054 ********** 2025-04-14 00:42:52.699594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-14 00:42:52.700452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-14 00:42:52.701823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-14 00:42:52.701859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-14 00:42:52.704760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-14 00:42:52.705720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-14 00:42:52.707656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-14 00:42:52.707881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-14 00:42:52.709208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-14 00:42:52.712260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-14 00:42:52.712789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-14 00:42:52.713286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-14 00:42:52.713973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-14 00:42:52.714758 | orchestrator | 2025-04-14 00:42:52.715031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:52.715060 | orchestrator | Monday 14 April 2025 00:42:52 +0000 (0:00:00.526) 0:00:01.580 ********** 2025-04-14 00:42:52.899502 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:52.900191 | orchestrator | 2025-04-14 00:42:52.900642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:52.907651 | orchestrator | Monday 14 April 2025 00:42:52 +0000 
(0:00:00.201) 0:00:01.782 ********** 2025-04-14 00:42:53.145268 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:53.146963 | orchestrator | 2025-04-14 00:42:53.384950 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:53.385096 | orchestrator | Monday 14 April 2025 00:42:53 +0000 (0:00:00.247) 0:00:02.029 ********** 2025-04-14 00:42:53.385280 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:53.385388 | orchestrator | 2025-04-14 00:42:53.385418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:53.385450 | orchestrator | Monday 14 April 2025 00:42:53 +0000 (0:00:00.234) 0:00:02.264 ********** 2025-04-14 00:42:53.619567 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:53.622344 | orchestrator | 2025-04-14 00:42:53.622447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:53.622515 | orchestrator | Monday 14 April 2025 00:42:53 +0000 (0:00:00.238) 0:00:02.503 ********** 2025-04-14 00:42:53.922465 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:53.926567 | orchestrator | 2025-04-14 00:42:53.926690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:53.928309 | orchestrator | Monday 14 April 2025 00:42:53 +0000 (0:00:00.303) 0:00:02.806 ********** 2025-04-14 00:42:54.161656 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:54.162093 | orchestrator | 2025-04-14 00:42:54.162399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:54.162751 | orchestrator | Monday 14 April 2025 00:42:54 +0000 (0:00:00.239) 0:00:03.045 ********** 2025-04-14 00:42:54.449058 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:54.449375 | orchestrator | 2025-04-14 00:42:54.449691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:54.453333 | orchestrator | Monday 14 April 2025 00:42:54 +0000 (0:00:00.286) 0:00:03.332 ********** 2025-04-14 00:42:54.666624 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:54.667067 | orchestrator | 2025-04-14 00:42:54.667125 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:54.667430 | orchestrator | Monday 14 April 2025 00:42:54 +0000 (0:00:00.218) 0:00:03.550 ********** 2025-04-14 00:42:55.357091 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d) 2025-04-14 00:42:55.359614 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d) 2025-04-14 00:42:55.360019 | orchestrator | 2025-04-14 00:42:55.360368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:55.360771 | orchestrator | Monday 14 April 2025 00:42:55 +0000 (0:00:00.685) 0:00:04.236 ********** 2025-04-14 00:42:56.435710 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e) 2025-04-14 00:42:56.435882 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e) 2025-04-14 00:42:56.436861 | orchestrator | 2025-04-14 00:42:56.440189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 
00:42:56.440429 | orchestrator | Monday 14 April 2025 00:42:56 +0000 (0:00:01.079) 0:00:05.316 ********** 2025-04-14 00:42:57.134218 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e) 2025-04-14 00:42:57.134421 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e) 2025-04-14 00:42:57.135309 | orchestrator | 2025-04-14 00:42:57.136302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:57.136822 | orchestrator | Monday 14 April 2025 00:42:57 +0000 (0:00:00.696) 0:00:06.013 ********** 2025-04-14 00:42:57.954835 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2) 2025-04-14 00:42:57.957226 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2) 2025-04-14 00:42:57.960710 | orchestrator | 2025-04-14 00:42:57.960767 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:42:57.962294 | orchestrator | Monday 14 April 2025 00:42:57 +0000 (0:00:00.823) 0:00:06.837 ********** 2025-04-14 00:42:58.583761 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:42:58.583930 | orchestrator | 2025-04-14 00:42:58.584854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:42:58.585962 | orchestrator | Monday 14 April 2025 00:42:58 +0000 (0:00:00.629) 0:00:07.466 ********** 2025-04-14 00:42:59.131627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-14 00:42:59.133075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-14 00:42:59.133682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-14 00:42:59.135392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-14 00:42:59.139777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-14 00:42:59.140131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-14 00:42:59.140656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-14 00:42:59.142322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-14 00:42:59.146863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-14 00:42:59.151359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-14 00:42:59.152233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-14 00:42:59.152273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-14 00:42:59.153383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-14 00:42:59.154586 | orchestrator | 2025-04-14 00:42:59.154807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:42:59.157828 | orchestrator | Monday 14 April 2025 00:42:59 +0000 
(0:00:00.547) 0:00:08.013 ********** 2025-04-14 00:42:59.421793 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:59.422077 | orchestrator | 2025-04-14 00:42:59.425081 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:42:59.425989 | orchestrator | Monday 14 April 2025 00:42:59 +0000 (0:00:00.289) 0:00:08.303 ********** 2025-04-14 00:42:59.624403 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:59.624914 | orchestrator | 2025-04-14 00:42:59.625368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:42:59.626046 | orchestrator | Monday 14 April 2025 00:42:59 +0000 (0:00:00.205) 0:00:08.508 ********** 2025-04-14 00:42:59.832304 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:42:59.832480 | orchestrator | 2025-04-14 00:42:59.833548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:42:59.836119 | orchestrator | Monday 14 April 2025 00:42:59 +0000 (0:00:00.206) 0:00:08.714 ********** 2025-04-14 00:43:00.061620 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:00.063357 | orchestrator | 2025-04-14 00:43:00.063414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:00.064926 | orchestrator | Monday 14 April 2025 00:43:00 +0000 (0:00:00.225) 0:00:08.939 ********** 2025-04-14 00:43:00.713218 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:00.714382 | orchestrator | 2025-04-14 00:43:00.717985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:00.720652 | orchestrator | Monday 14 April 2025 00:43:00 +0000 (0:00:00.656) 0:00:09.596 ********** 2025-04-14 00:43:00.959077 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:00.961333 | orchestrator | 2025-04-14 00:43:00.961400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:00.963933 | orchestrator | Monday 14 April 2025 00:43:00 +0000 (0:00:00.241) 0:00:09.838 ********** 2025-04-14 00:43:01.193896 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:01.199171 | orchestrator | 2025-04-14 00:43:01.199289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:01.199306 | orchestrator | Monday 14 April 2025 00:43:01 +0000 (0:00:00.233) 0:00:10.072 ********** 2025-04-14 00:43:01.405508 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:01.407335 | orchestrator | 2025-04-14 00:43:01.407391 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:01.407846 | orchestrator | Monday 14 April 2025 00:43:01 +0000 (0:00:00.217) 0:00:10.289 ********** 2025-04-14 00:43:02.086770 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-14 00:43:02.086941 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-14 00:43:02.086974 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-14 00:43:02.087782 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-14 00:43:02.325336 | orchestrator | 2025-04-14 00:43:02.325457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:02.325476 | orchestrator | Monday 14 April 2025 00:43:02 +0000 (0:00:00.681) 0:00:10.970 ********** 2025-04-14 00:43:02.325505 | orchestrator | 
skipping: [testbed-node-3] 2025-04-14 00:43:02.325762 | orchestrator | 2025-04-14 00:43:02.325793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:02.325928 | orchestrator | Monday 14 April 2025 00:43:02 +0000 (0:00:00.239) 0:00:11.210 ********** 2025-04-14 00:43:02.504394 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:02.505023 | orchestrator | 2025-04-14 00:43:02.506769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:02.507799 | orchestrator | Monday 14 April 2025 00:43:02 +0000 (0:00:00.177) 0:00:11.387 ********** 2025-04-14 00:43:02.703261 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:02.705145 | orchestrator | 2025-04-14 00:43:02.706520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:02.707408 | orchestrator | Monday 14 April 2025 00:43:02 +0000 (0:00:00.199) 0:00:11.587 ********** 2025-04-14 00:43:02.899490 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:02.902368 | orchestrator | 2025-04-14 00:43:02.903381 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-14 00:43:02.904031 | orchestrator | Monday 14 April 2025 00:43:02 +0000 (0:00:00.195) 0:00:11.782 ********** 2025-04-14 00:43:03.082971 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-04-14 00:43:03.083262 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-04-14 00:43:03.083300 | orchestrator | 2025-04-14 00:43:03.084130 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-14 00:43:03.085227 | orchestrator | Monday 14 April 2025 00:43:03 +0000 (0:00:00.184) 0:00:11.966 ********** 2025-04-14 00:43:03.419058 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:03.421278 | orchestrator | 2025-04-14 00:43:03.422975 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-14 00:43:03.425730 | orchestrator | Monday 14 April 2025 00:43:03 +0000 (0:00:00.334) 0:00:12.301 ********** 2025-04-14 00:43:03.605566 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:03.607266 | orchestrator | 2025-04-14 00:43:03.610140 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-14 00:43:03.610552 | orchestrator | Monday 14 April 2025 00:43:03 +0000 (0:00:00.183) 0:00:12.485 ********** 2025-04-14 00:43:03.785648 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:03.787061 | orchestrator | 2025-04-14 00:43:03.788269 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-14 00:43:03.789274 | orchestrator | Monday 14 April 2025 00:43:03 +0000 (0:00:00.182) 0:00:12.667 ********** 2025-04-14 00:43:03.972619 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:43:03.975446 | orchestrator | 2025-04-14 00:43:03.977031 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-14 00:43:03.977256 | orchestrator | Monday 14 April 2025 00:43:03 +0000 (0:00:00.186) 0:00:12.854 ********** 2025-04-14 00:43:04.160701 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '010b5855-d3d9-5348-85e9-2943091c3a59'}}) 2025-04-14 00:43:04.162669 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '47a37963-cc76-524e-bf57-deb935e0a7e9'}}) 2025-04-14 00:43:04.162712 | orchestrator | 2025-04-14 00:43:04.162737 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-14 00:43:04.164456 | orchestrator | Monday 14 April 2025 00:43:04 +0000 (0:00:00.186) 0:00:13.041 ********** 2025-04-14 00:43:04.358338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '010b5855-d3d9-5348-85e9-2943091c3a59'}})  2025-04-14 00:43:04.361996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47a37963-cc76-524e-bf57-deb935e0a7e9'}})  2025-04-14 00:43:04.362969 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:04.364204 | orchestrator | 2025-04-14 00:43:04.364943 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-14 00:43:04.365409 | orchestrator | Monday 14 April 2025 00:43:04 +0000 (0:00:00.197) 0:00:13.239 ********** 2025-04-14 00:43:04.537000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '010b5855-d3d9-5348-85e9-2943091c3a59'}})  2025-04-14 00:43:04.537557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47a37963-cc76-524e-bf57-deb935e0a7e9'}})  2025-04-14 00:43:04.538744 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:04.539768 | orchestrator | 2025-04-14 00:43:04.541401 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-14 00:43:04.543386 | orchestrator | Monday 14 April 2025 00:43:04 +0000 (0:00:00.181) 0:00:13.420 ********** 2025-04-14 00:43:04.705421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '010b5855-d3d9-5348-85e9-2943091c3a59'}})  2025-04-14 00:43:04.706551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47a37963-cc76-524e-bf57-deb935e0a7e9'}})  2025-04-14 00:43:04.707689 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:04.708415 | orchestrator | 2025-04-14 00:43:04.708756 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-14 00:43:04.711302 | orchestrator | Monday 14 April 2025 00:43:04 +0000 (0:00:00.167) 0:00:13.588 ********** 2025-04-14 00:43:04.861117 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:43:04.863871 | orchestrator | 2025-04-14 00:43:04.865405 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-14 00:43:04.865713 | orchestrator | Monday 14 April 2025 00:43:04 +0000 (0:00:00.153) 0:00:13.742 ********** 2025-04-14 00:43:05.022407 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:43:05.023134 | orchestrator | 2025-04-14 00:43:05.023197 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-14 00:43:05.023223 | orchestrator | Monday 14 April 2025 00:43:05 +0000 (0:00:00.152) 0:00:13.894 ********** 2025-04-14 00:43:05.205717 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:05.206544 | orchestrator | 2025-04-14 00:43:05.206707 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-14 00:43:05.209910 | orchestrator | Monday 14 April 2025 00:43:05 +0000 (0:00:00.195) 0:00:14.089 ********** 2025-04-14 00:43:05.386644 | orchestrator | skipping: [testbed-node-3] 2025-04-14 
00:43:05.386822 | orchestrator | 2025-04-14 00:43:05.388170 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-14 00:43:05.779495 | orchestrator | Monday 14 April 2025 00:43:05 +0000 (0:00:00.177) 0:00:14.267 ********** 2025-04-14 00:43:05.780465 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:05.969727 | orchestrator | 2025-04-14 00:43:05.969850 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-14 00:43:05.969872 | orchestrator | Monday 14 April 2025 00:43:05 +0000 (0:00:00.395) 0:00:14.663 ********** 2025-04-14 00:43:05.969904 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:43:05.970579 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:05.970734 | orchestrator |  "sdb": { 2025-04-14 00:43:05.971299 | orchestrator |  "osd_lvm_uuid": "010b5855-d3d9-5348-85e9-2943091c3a59" 2025-04-14 00:43:05.973799 | orchestrator |  }, 2025-04-14 00:43:05.975378 | orchestrator |  "sdc": { 2025-04-14 00:43:05.975428 | orchestrator |  "osd_lvm_uuid": "47a37963-cc76-524e-bf57-deb935e0a7e9" 2025-04-14 00:43:05.978710 | orchestrator |  } 2025-04-14 00:43:05.978771 | orchestrator |  } 2025-04-14 00:43:05.978800 | orchestrator | } 2025-04-14 00:43:06.255702 | orchestrator | 2025-04-14 00:43:06.255822 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-14 00:43:06.255842 | orchestrator | Monday 14 April 2025 00:43:05 +0000 (0:00:00.190) 0:00:14.853 ********** 2025-04-14 00:43:06.255874 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:06.258882 | orchestrator | 2025-04-14 00:43:06.259013 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-14 00:43:06.260574 | orchestrator | Monday 14 April 2025 00:43:06 +0000 (0:00:00.280) 0:00:15.134 ********** 2025-04-14 00:43:06.453747 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:06.453925 | orchestrator | 2025-04-14 00:43:06.453950 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-14 00:43:06.594818 | orchestrator | Monday 14 April 2025 00:43:06 +0000 (0:00:00.204) 0:00:15.338 ********** 2025-04-14 00:43:06.594937 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:43:06.597572 | orchestrator | 2025-04-14 00:43:06.597623 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-14 00:43:06.597672 | orchestrator | Monday 14 April 2025 00:43:06 +0000 (0:00:00.138) 0:00:15.477 ********** 2025-04-14 00:43:06.872967 | orchestrator | changed: [testbed-node-3] => { 2025-04-14 00:43:06.873218 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-14 00:43:06.875042 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:06.875080 | orchestrator |  "sdb": { 2025-04-14 00:43:06.875210 | orchestrator |  "osd_lvm_uuid": "010b5855-d3d9-5348-85e9-2943091c3a59" 2025-04-14 00:43:06.875482 | orchestrator |  }, 2025-04-14 00:43:06.875791 | orchestrator |  "sdc": { 2025-04-14 00:43:06.876122 | orchestrator |  "osd_lvm_uuid": "47a37963-cc76-524e-bf57-deb935e0a7e9" 2025-04-14 00:43:06.876388 | orchestrator |  } 2025-04-14 00:43:06.876650 | orchestrator |  }, 2025-04-14 00:43:06.877104 | orchestrator |  "lvm_volumes": [ 2025-04-14 00:43:06.877953 | orchestrator |  { 2025-04-14 00:43:06.878737 | orchestrator |  "data": "osd-block-010b5855-d3d9-5348-85e9-2943091c3a59", 2025-04-14 00:43:06.881594 | orchestrator |  
"data_vg": "ceph-010b5855-d3d9-5348-85e9-2943091c3a59" 2025-04-14 00:43:06.882866 | orchestrator |  }, 2025-04-14 00:43:06.882921 | orchestrator |  { 2025-04-14 00:43:06.882938 | orchestrator |  "data": "osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9", 2025-04-14 00:43:06.882963 | orchestrator |  "data_vg": "ceph-47a37963-cc76-524e-bf57-deb935e0a7e9" 2025-04-14 00:43:06.883446 | orchestrator |  } 2025-04-14 00:43:06.883913 | orchestrator |  ] 2025-04-14 00:43:06.886223 | orchestrator |  } 2025-04-14 00:43:06.887393 | orchestrator | } 2025-04-14 00:43:06.888865 | orchestrator | 2025-04-14 00:43:06.890308 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-14 00:43:06.891378 | orchestrator | Monday 14 April 2025 00:43:06 +0000 (0:00:00.280) 0:00:15.757 ********** 2025-04-14 00:43:09.211720 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-14 00:43:09.212246 | orchestrator | 2025-04-14 00:43:09.214690 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-14 00:43:09.217753 | orchestrator | 2025-04-14 00:43:09.220380 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:43:09.220755 | orchestrator | Monday 14 April 2025 00:43:09 +0000 (0:00:02.336) 0:00:18.094 ********** 2025-04-14 00:43:09.466944 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-14 00:43:09.467396 | orchestrator | 2025-04-14 00:43:09.468531 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:43:09.470497 | orchestrator | Monday 14 April 2025 00:43:09 +0000 (0:00:00.256) 0:00:18.350 ********** 2025-04-14 00:43:09.745608 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:43:09.748128 | orchestrator | 2025-04-14 00:43:09.748199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:09.748224 | orchestrator | Monday 14 April 2025 00:43:09 +0000 (0:00:00.275) 0:00:18.626 ********** 2025-04-14 00:43:10.182079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-14 00:43:10.182242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-14 00:43:10.182392 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-14 00:43:10.182416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-14 00:43:10.182568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-14 00:43:10.182914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-14 00:43:10.183561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-14 00:43:10.186633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-14 00:43:10.186788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-14 00:43:10.186819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-14 00:43:10.187866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-14 00:43:10.188354 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-14 00:43:10.189525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-14 00:43:10.189933 | orchestrator | 2025-04-14 00:43:10.190268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:10.190934 | orchestrator | Monday 14 April 2025 00:43:10 +0000 (0:00:00.438) 0:00:19.064 ********** 2025-04-14 00:43:10.414767 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:10.414963 | orchestrator | 2025-04-14 00:43:10.414992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:10.415237 | orchestrator | Monday 14 April 2025 00:43:10 +0000 (0:00:00.231) 0:00:19.295 ********** 2025-04-14 00:43:10.636214 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:10.636488 | orchestrator | 2025-04-14 00:43:10.636525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:10.637044 | orchestrator | Monday 14 April 2025 00:43:10 +0000 (0:00:00.224) 0:00:19.520 ********** 2025-04-14 00:43:10.862110 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:10.862930 | orchestrator | 2025-04-14 00:43:11.458666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:11.458787 | orchestrator | Monday 14 April 2025 00:43:10 +0000 (0:00:00.225) 0:00:19.746 ********** 2025-04-14 00:43:11.458851 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:11.460955 | orchestrator | 2025-04-14 00:43:11.462001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:11.463282 | orchestrator | Monday 14 April 2025 00:43:11 +0000 (0:00:00.593) 0:00:20.339 ********** 2025-04-14 00:43:11.678879 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:11.679730 | orchestrator | 2025-04-14 00:43:11.679992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:11.681943 | orchestrator | Monday 14 April 2025 00:43:11 +0000 (0:00:00.222) 0:00:20.562 ********** 2025-04-14 00:43:11.926390 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:11.927429 | orchestrator | 2025-04-14 00:43:11.927866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:11.929452 | orchestrator | Monday 14 April 2025 00:43:11 +0000 (0:00:00.247) 0:00:20.809 ********** 2025-04-14 00:43:12.135380 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:12.136921 | orchestrator | 2025-04-14 00:43:12.137491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:12.142004 | orchestrator | Monday 14 April 2025 00:43:12 +0000 (0:00:00.208) 0:00:21.018 ********** 2025-04-14 00:43:12.354309 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:12.356310 | orchestrator | 2025-04-14 00:43:12.357276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:12.359625 | orchestrator | Monday 14 April 2025 00:43:12 +0000 (0:00:00.214) 0:00:21.233 ********** 2025-04-14 00:43:12.798859 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12) 2025-04-14 00:43:12.800550 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12) 2025-04-14 00:43:12.802587 | orchestrator | 2025-04-14 00:43:12.803651 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:12.805119 | orchestrator | Monday 14 April 2025 00:43:12 +0000 (0:00:00.447) 0:00:21.681 ********** 2025-04-14 00:43:13.269440 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9) 2025-04-14 00:43:13.271457 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9) 2025-04-14 00:43:13.274577 | orchestrator | 2025-04-14 00:43:13.276142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:13.277285 | orchestrator | Monday 14 April 2025 00:43:13 +0000 (0:00:00.471) 0:00:22.152 ********** 2025-04-14 00:43:13.730788 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d) 2025-04-14 00:43:13.731810 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d) 2025-04-14 00:43:13.735257 | orchestrator | 2025-04-14 00:43:13.736018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:13.737623 | orchestrator | Monday 14 April 2025 00:43:13 +0000 (0:00:00.462) 0:00:22.614 ********** 2025-04-14 00:43:14.390305 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57) 2025-04-14 00:43:14.390489 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57) 2025-04-14 00:43:14.393774 | orchestrator | 2025-04-14 00:43:14.393812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:14.394499 | orchestrator | Monday 14 April 2025 00:43:14 +0000 (0:00:00.657) 0:00:23.271 ********** 2025-04-14 00:43:15.191229 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:43:15.191450 | orchestrator | 2025-04-14 00:43:15.193877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:15.194968 | orchestrator | Monday 14 April 2025 00:43:15 +0000 (0:00:00.801) 0:00:24.073 ********** 2025-04-14 00:43:15.598500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-14 00:43:15.598701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-14 00:43:15.600056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-14 00:43:15.600995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-14 00:43:15.602101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-14 00:43:15.603254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-14 00:43:15.603941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-14 00:43:15.604826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-14 00:43:15.605385 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-14 00:43:15.606607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-14 00:43:15.607489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-14 00:43:15.607988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-14 00:43:15.608369 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-14 00:43:15.608773 | orchestrator | 2025-04-14 00:43:15.609584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:15.609836 | orchestrator | Monday 14 April 2025 00:43:15 +0000 (0:00:00.406) 0:00:24.480 ********** 2025-04-14 00:43:15.832746 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:15.832967 | orchestrator | 2025-04-14 00:43:15.833003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:15.833061 | orchestrator | Monday 14 April 2025 00:43:15 +0000 (0:00:00.234) 0:00:24.714 ********** 2025-04-14 00:43:16.041108 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:16.042326 | orchestrator | 2025-04-14 00:43:16.043336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:16.043562 | orchestrator | Monday 14 April 2025 00:43:16 +0000 (0:00:00.209) 0:00:24.923 ********** 2025-04-14 00:43:16.245206 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:16.245904 | orchestrator | 2025-04-14 00:43:16.246967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:16.247677 | orchestrator | Monday 14 April 2025 00:43:16 +0000 (0:00:00.204) 0:00:25.128 ********** 2025-04-14 00:43:16.446681 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:16.447210 | orchestrator | 2025-04-14 00:43:16.447906 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:16.448582 | orchestrator | Monday 14 April 2025 00:43:16 +0000 (0:00:00.201) 0:00:25.329 ********** 2025-04-14 00:43:16.652925 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:16.654473 | orchestrator | 2025-04-14 00:43:16.657463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:16.658564 | orchestrator | Monday 14 April 2025 00:43:16 +0000 (0:00:00.205) 0:00:25.535 ********** 2025-04-14 00:43:16.886530 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:16.887768 | orchestrator | 2025-04-14 00:43:16.888543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:17.081819 | orchestrator | Monday 14 April 2025 00:43:16 +0000 (0:00:00.234) 0:00:25.770 ********** 2025-04-14 00:43:17.081947 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:17.084123 | orchestrator | 2025-04-14 00:43:17.085613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:17.085841 | orchestrator | Monday 14 April 2025 00:43:17 +0000 (0:00:00.195) 0:00:25.965 ********** 2025-04-14 00:43:17.283218 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:17.283872 | orchestrator | 2025-04-14 00:43:17.284815 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-14 00:43:17.287224 | orchestrator | Monday 14 April 2025 00:43:17 +0000 (0:00:00.201) 0:00:26.166 ********** 2025-04-14 00:43:18.362365 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-14 00:43:18.365299 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-14 00:43:18.365841 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-14 00:43:18.366880 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-14 00:43:18.367805 | orchestrator | 2025-04-14 00:43:18.368392 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:18.369233 | orchestrator | Monday 14 April 2025 00:43:18 +0000 (0:00:01.077) 0:00:27.244 ********** 2025-04-14 00:43:18.568244 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:18.568840 | orchestrator | 2025-04-14 00:43:18.569449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:18.570542 | orchestrator | Monday 14 April 2025 00:43:18 +0000 (0:00:00.206) 0:00:27.451 ********** 2025-04-14 00:43:18.759236 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:18.760556 | orchestrator | 2025-04-14 00:43:18.761979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:18.764209 | orchestrator | Monday 14 April 2025 00:43:18 +0000 (0:00:00.192) 0:00:27.643 ********** 2025-04-14 00:43:18.967392 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:18.967858 | orchestrator | 2025-04-14 00:43:18.968634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:18.969166 | orchestrator | Monday 14 April 2025 00:43:18 +0000 (0:00:00.207) 0:00:27.850 ********** 2025-04-14 00:43:19.177605 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:19.180469 | orchestrator | 2025-04-14 00:43:19.180602 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-14 00:43:19.377904 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.208) 0:00:28.059 ********** 2025-04-14 00:43:19.378128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-04-14 00:43:19.378458 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-04-14 00:43:19.379209 | orchestrator | 2025-04-14 00:43:19.380134 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-14 00:43:19.381000 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.202) 0:00:28.261 ********** 2025-04-14 00:43:19.526860 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:19.703531 | orchestrator | 2025-04-14 00:43:19.703653 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-14 00:43:19.703673 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.146) 0:00:28.407 ********** 2025-04-14 00:43:19.703705 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:19.703812 | orchestrator | 2025-04-14 00:43:19.704277 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-14 00:43:19.705073 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.179) 0:00:28.586 ********** 2025-04-14 00:43:19.848066 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:19.848364 | orchestrator | 2025-04-14 
00:43:19.848418 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-14 00:43:19.852062 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.143) 0:00:28.730 ********** 2025-04-14 00:43:20.001348 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:43:20.002434 | orchestrator | 2025-04-14 00:43:20.003695 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-14 00:43:20.004912 | orchestrator | Monday 14 April 2025 00:43:19 +0000 (0:00:00.154) 0:00:28.884 ********** 2025-04-14 00:43:20.217519 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89320cc7-f853-5314-9a76-744a2d019bd6'}}) 2025-04-14 00:43:20.219129 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}}) 2025-04-14 00:43:20.220660 | orchestrator | 2025-04-14 00:43:20.221915 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-14 00:43:20.224675 | orchestrator | Monday 14 April 2025 00:43:20 +0000 (0:00:00.216) 0:00:29.100 ********** 2025-04-14 00:43:20.625593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89320cc7-f853-5314-9a76-744a2d019bd6'}})  2025-04-14 00:43:20.629252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}})  2025-04-14 00:43:20.630243 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:20.630308 | orchestrator | 2025-04-14 00:43:20.630365 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-14 00:43:20.630424 | orchestrator | Monday 14 April 2025 00:43:20 +0000 (0:00:00.406) 0:00:29.507 ********** 2025-04-14 00:43:20.804243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89320cc7-f853-5314-9a76-744a2d019bd6'}})  2025-04-14 00:43:20.808567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}})  2025-04-14 00:43:20.809646 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:20.809691 | orchestrator | 2025-04-14 00:43:20.810617 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-14 00:43:20.810947 | orchestrator | Monday 14 April 2025 00:43:20 +0000 (0:00:00.179) 0:00:29.687 ********** 2025-04-14 00:43:20.980548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89320cc7-f853-5314-9a76-744a2d019bd6'}})  2025-04-14 00:43:20.980813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}})  2025-04-14 00:43:20.981864 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:20.982697 | orchestrator | 2025-04-14 00:43:20.982978 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-14 00:43:20.983700 | orchestrator | Monday 14 April 2025 00:43:20 +0000 (0:00:00.176) 0:00:29.864 ********** 2025-04-14 00:43:21.132814 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:43:21.133810 | orchestrator | 2025-04-14 00:43:21.136803 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-14 00:43:21.136862 | orchestrator | Monday 14 April 2025 00:43:21 +0000 
(0:00:00.151) 0:00:30.016 ********** 2025-04-14 00:43:21.282941 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:43:21.283595 | orchestrator | 2025-04-14 00:43:21.284790 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-14 00:43:21.287020 | orchestrator | Monday 14 April 2025 00:43:21 +0000 (0:00:00.150) 0:00:30.166 ********** 2025-04-14 00:43:21.424932 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:21.426171 | orchestrator | 2025-04-14 00:43:21.426814 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-14 00:43:21.429102 | orchestrator | Monday 14 April 2025 00:43:21 +0000 (0:00:00.141) 0:00:30.308 ********** 2025-04-14 00:43:21.607764 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:21.608369 | orchestrator | 2025-04-14 00:43:21.609453 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-14 00:43:21.610883 | orchestrator | Monday 14 April 2025 00:43:21 +0000 (0:00:00.181) 0:00:30.489 ********** 2025-04-14 00:43:21.743580 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:21.744366 | orchestrator | 2025-04-14 00:43:21.745542 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-14 00:43:21.746784 | orchestrator | Monday 14 April 2025 00:43:21 +0000 (0:00:00.138) 0:00:30.627 ********** 2025-04-14 00:43:21.895779 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:43:21.897271 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:21.898447 | orchestrator |  "sdb": { 2025-04-14 00:43:21.899508 | orchestrator |  "osd_lvm_uuid": "89320cc7-f853-5314-9a76-744a2d019bd6" 2025-04-14 00:43:21.900793 | orchestrator |  }, 2025-04-14 00:43:21.901997 | orchestrator |  "sdc": { 2025-04-14 00:43:21.904286 | orchestrator |  "osd_lvm_uuid": "a8cf203b-da46-5fbb-85f7-5c1db9738ebe" 2025-04-14 00:43:21.905529 | orchestrator |  } 2025-04-14 00:43:21.905942 | orchestrator |  } 2025-04-14 00:43:21.906306 | orchestrator | } 2025-04-14 00:43:21.906967 | orchestrator | 2025-04-14 00:43:21.907250 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-14 00:43:21.907635 | orchestrator | Monday 14 April 2025 00:43:21 +0000 (0:00:00.151) 0:00:30.779 ********** 2025-04-14 00:43:22.039723 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:22.041301 | orchestrator | 2025-04-14 00:43:22.177391 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-14 00:43:22.177470 | orchestrator | Monday 14 April 2025 00:43:22 +0000 (0:00:00.143) 0:00:30.922 ********** 2025-04-14 00:43:22.177499 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:22.178908 | orchestrator | 2025-04-14 00:43:22.179817 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-14 00:43:22.180265 | orchestrator | Monday 14 April 2025 00:43:22 +0000 (0:00:00.139) 0:00:31.061 ********** 2025-04-14 00:43:22.318550 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:43:22.319310 | orchestrator | 2025-04-14 00:43:22.320119 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-14 00:43:22.320771 | orchestrator | Monday 14 April 2025 00:43:22 +0000 (0:00:00.139) 0:00:31.201 ********** 2025-04-14 00:43:22.827761 | orchestrator | changed: [testbed-node-4] => { 2025-04-14 00:43:22.829054 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-14 00:43:22.831275 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:22.831442 | orchestrator |  "sdb": { 2025-04-14 00:43:22.831502 | orchestrator |  "osd_lvm_uuid": "89320cc7-f853-5314-9a76-744a2d019bd6" 2025-04-14 00:43:22.832451 | orchestrator |  }, 2025-04-14 00:43:22.833803 | orchestrator |  "sdc": { 2025-04-14 00:43:22.834816 | orchestrator |  "osd_lvm_uuid": "a8cf203b-da46-5fbb-85f7-5c1db9738ebe" 2025-04-14 00:43:22.835809 | orchestrator |  } 2025-04-14 00:43:22.836685 | orchestrator |  }, 2025-04-14 00:43:22.837492 | orchestrator |  "lvm_volumes": [ 2025-04-14 00:43:22.839010 | orchestrator |  { 2025-04-14 00:43:22.839995 | orchestrator |  "data": "osd-block-89320cc7-f853-5314-9a76-744a2d019bd6", 2025-04-14 00:43:22.841404 | orchestrator |  "data_vg": "ceph-89320cc7-f853-5314-9a76-744a2d019bd6" 2025-04-14 00:43:22.841915 | orchestrator |  }, 2025-04-14 00:43:22.842691 | orchestrator |  { 2025-04-14 00:43:22.843290 | orchestrator |  "data": "osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe", 2025-04-14 00:43:22.843501 | orchestrator |  "data_vg": "ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe" 2025-04-14 00:43:22.844109 | orchestrator |  } 2025-04-14 00:43:22.844746 | orchestrator |  ] 2025-04-14 00:43:22.845050 | orchestrator |  } 2025-04-14 00:43:22.846068 | orchestrator | } 2025-04-14 00:43:22.846407 | orchestrator | 2025-04-14 00:43:22.846915 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-14 00:43:22.847464 | orchestrator | Monday 14 April 2025 00:43:22 +0000 (0:00:00.508) 0:00:31.709 ********** 2025-04-14 00:43:24.205516 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-14 00:43:24.207471 | orchestrator | 2025-04-14 00:43:24.209573 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-04-14 00:43:24.210543 | orchestrator | 2025-04-14 00:43:24.212454 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:43:24.213808 | orchestrator | Monday 14 April 2025 00:43:24 +0000 (0:00:01.377) 0:00:33.087 ********** 2025-04-14 00:43:24.445838 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-14 00:43:24.446195 | orchestrator | 2025-04-14 00:43:24.446746 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:43:24.447042 | orchestrator | Monday 14 April 2025 00:43:24 +0000 (0:00:00.242) 0:00:33.329 ********** 2025-04-14 00:43:25.043811 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:43:25.043937 | orchestrator | 2025-04-14 00:43:25.045398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:25.045984 | orchestrator | Monday 14 April 2025 00:43:25 +0000 (0:00:00.596) 0:00:33.926 ********** 2025-04-14 00:43:25.459435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-14 00:43:25.459615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-14 00:43:25.462659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-14 00:43:25.463746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-14 00:43:25.463773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop4) 2025-04-14 00:43:25.463792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-14 00:43:25.464109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-14 00:43:25.464782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-14 00:43:25.465216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-14 00:43:25.465532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-14 00:43:25.466355 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-14 00:43:25.466572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-14 00:43:25.467243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-14 00:43:25.467553 | orchestrator | 2025-04-14 00:43:25.468079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:25.468441 | orchestrator | Monday 14 April 2025 00:43:25 +0000 (0:00:00.415) 0:00:34.341 ********** 2025-04-14 00:43:25.724745 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:25.725983 | orchestrator | 2025-04-14 00:43:25.726338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:25.727311 | orchestrator | Monday 14 April 2025 00:43:25 +0000 (0:00:00.266) 0:00:34.608 ********** 2025-04-14 00:43:25.980864 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:25.981865 | orchestrator | 2025-04-14 00:43:25.981932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:25.982560 | orchestrator | Monday 14 April 2025 00:43:25 +0000 (0:00:00.245) 0:00:34.854 ********** 2025-04-14 00:43:26.185213 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:26.186313 | orchestrator | 2025-04-14 00:43:26.186378 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:26.186961 | orchestrator | Monday 14 April 2025 00:43:26 +0000 (0:00:00.214) 0:00:35.069 ********** 2025-04-14 00:43:26.387322 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:26.387758 | orchestrator | 2025-04-14 00:43:26.388748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:26.389559 | orchestrator | Monday 14 April 2025 00:43:26 +0000 (0:00:00.202) 0:00:35.271 ********** 2025-04-14 00:43:26.596423 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:26.597515 | orchestrator | 2025-04-14 00:43:26.598376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:26.599154 | orchestrator | Monday 14 April 2025 00:43:26 +0000 (0:00:00.207) 0:00:35.478 ********** 2025-04-14 00:43:26.796368 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:26.796620 | orchestrator | 2025-04-14 00:43:26.797015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:26.798487 | orchestrator | Monday 14 April 2025 00:43:26 +0000 (0:00:00.201) 0:00:35.679 ********** 2025-04-14 00:43:26.997912 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:26.998843 
| orchestrator | 2025-04-14 00:43:26.999905 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:27.000646 | orchestrator | Monday 14 April 2025 00:43:26 +0000 (0:00:00.199) 0:00:35.879 ********** 2025-04-14 00:43:27.214954 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:27.215618 | orchestrator | 2025-04-14 00:43:27.216337 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:27.217534 | orchestrator | Monday 14 April 2025 00:43:27 +0000 (0:00:00.218) 0:00:36.098 ********** 2025-04-14 00:43:27.857242 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2) 2025-04-14 00:43:27.860695 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2) 2025-04-14 00:43:27.861396 | orchestrator | 2025-04-14 00:43:27.861442 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:27.862813 | orchestrator | Monday 14 April 2025 00:43:27 +0000 (0:00:00.637) 0:00:36.736 ********** 2025-04-14 00:43:28.301851 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496) 2025-04-14 00:43:28.303042 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496) 2025-04-14 00:43:28.304426 | orchestrator | 2025-04-14 00:43:28.305462 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:28.307384 | orchestrator | Monday 14 April 2025 00:43:28 +0000 (0:00:00.447) 0:00:37.184 ********** 2025-04-14 00:43:28.776246 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3) 2025-04-14 00:43:28.777481 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3) 2025-04-14 00:43:28.778749 | orchestrator | 2025-04-14 00:43:28.779749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:28.780761 | orchestrator | Monday 14 April 2025 00:43:28 +0000 (0:00:00.473) 0:00:37.657 ********** 2025-04-14 00:43:29.248523 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f) 2025-04-14 00:43:29.248776 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f) 2025-04-14 00:43:29.248805 | orchestrator | 2025-04-14 00:43:29.248825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:43:29.249002 | orchestrator | Monday 14 April 2025 00:43:29 +0000 (0:00:00.474) 0:00:38.132 ********** 2025-04-14 00:43:29.598297 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:43:29.599167 | orchestrator | 2025-04-14 00:43:29.599364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:29.600204 | orchestrator | Monday 14 April 2025 00:43:29 +0000 (0:00:00.349) 0:00:38.482 ********** 2025-04-14 00:43:30.080757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-14 00:43:30.080923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-04-14 00:43:30.081329 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-14 00:43:30.082891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-14 00:43:30.084118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-14 00:43:30.085057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-14 00:43:30.085917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-14 00:43:30.086895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-14 00:43:30.087897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-14 00:43:30.088494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-14 00:43:30.089191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-14 00:43:30.089468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-14 00:43:30.089808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-14 00:43:30.090761 | orchestrator | 2025-04-14 00:43:30.090860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:30.091604 | orchestrator | Monday 14 April 2025 00:43:30 +0000 (0:00:00.481) 0:00:38.963 ********** 2025-04-14 00:43:30.281807 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:30.282375 | orchestrator | 2025-04-14 00:43:30.282735 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:30.283492 | orchestrator | Monday 14 April 2025 00:43:30 +0000 (0:00:00.202) 0:00:39.165 ********** 2025-04-14 00:43:30.500883 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:30.501573 | orchestrator | 2025-04-14 00:43:30.502305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:30.503282 | orchestrator | Monday 14 April 2025 00:43:30 +0000 (0:00:00.218) 0:00:39.384 ********** 2025-04-14 00:43:30.701098 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:30.701303 | orchestrator | 2025-04-14 00:43:30.701337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:30.701679 | orchestrator | Monday 14 April 2025 00:43:30 +0000 (0:00:00.198) 0:00:39.583 ********** 2025-04-14 00:43:31.331662 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:31.332678 | orchestrator | 2025-04-14 00:43:31.333948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:31.336379 | orchestrator | Monday 14 April 2025 00:43:31 +0000 (0:00:00.631) 0:00:40.215 ********** 2025-04-14 00:43:31.558983 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:31.559474 | orchestrator | 2025-04-14 00:43:31.560594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:31.561578 | orchestrator | Monday 14 April 2025 00:43:31 +0000 (0:00:00.227) 0:00:40.442 ********** 2025-04-14 00:43:31.770749 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
00:43:31.770922 | orchestrator | 2025-04-14 00:43:31.773578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:31.774080 | orchestrator | Monday 14 April 2025 00:43:31 +0000 (0:00:00.209) 0:00:40.652 ********** 2025-04-14 00:43:31.981356 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:31.984286 | orchestrator | 2025-04-14 00:43:31.986319 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:32.196642 | orchestrator | Monday 14 April 2025 00:43:31 +0000 (0:00:00.209) 0:00:40.862 ********** 2025-04-14 00:43:32.196801 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:32.198483 | orchestrator | 2025-04-14 00:43:32.199192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:32.200384 | orchestrator | Monday 14 April 2025 00:43:32 +0000 (0:00:00.215) 0:00:41.078 ********** 2025-04-14 00:43:32.876572 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-14 00:43:32.876753 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-14 00:43:32.877979 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-14 00:43:32.879200 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-14 00:43:32.880094 | orchestrator | 2025-04-14 00:43:32.880519 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:32.881410 | orchestrator | Monday 14 April 2025 00:43:32 +0000 (0:00:00.678) 0:00:41.757 ********** 2025-04-14 00:43:33.080834 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:33.082229 | orchestrator | 2025-04-14 00:43:33.082280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:33.082967 | orchestrator | Monday 14 April 2025 00:43:33 +0000 (0:00:00.206) 0:00:41.963 ********** 2025-04-14 00:43:33.287041 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:33.288384 | orchestrator | 2025-04-14 00:43:33.289516 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:33.290731 | orchestrator | Monday 14 April 2025 00:43:33 +0000 (0:00:00.207) 0:00:42.171 ********** 2025-04-14 00:43:33.512047 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:33.514193 | orchestrator | 2025-04-14 00:43:33.515323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:43:33.516379 | orchestrator | Monday 14 April 2025 00:43:33 +0000 (0:00:00.224) 0:00:42.395 ********** 2025-04-14 00:43:33.754453 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:33.757274 | orchestrator | 2025-04-14 00:43:33.758552 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-04-14 00:43:33.759629 | orchestrator | Monday 14 April 2025 00:43:33 +0000 (0:00:00.240) 0:00:42.636 ********** 2025-04-14 00:43:34.147969 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-04-14 00:43:34.149014 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-04-14 00:43:34.149729 | orchestrator | 2025-04-14 00:43:34.151112 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-04-14 00:43:34.151952 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.390) 0:00:43.026 ********** 2025-04-14 00:43:34.293199 | 
orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:34.294289 | orchestrator | 2025-04-14 00:43:34.297775 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-04-14 00:43:34.438189 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.148) 0:00:43.175 ********** 2025-04-14 00:43:34.438333 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:34.438421 | orchestrator | 2025-04-14 00:43:34.438900 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-04-14 00:43:34.440033 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.146) 0:00:43.321 ********** 2025-04-14 00:43:34.586296 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:34.586519 | orchestrator | 2025-04-14 00:43:34.587893 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-04-14 00:43:34.588253 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.147) 0:00:43.469 ********** 2025-04-14 00:43:34.747231 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:43:34.749565 | orchestrator | 2025-04-14 00:43:34.749632 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-04-14 00:43:34.749874 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.158) 0:00:43.628 ********** 2025-04-14 00:43:34.947603 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3f558b9-064d-5710-baa4-8e41f44a2baf'}}) 2025-04-14 00:43:34.949618 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}}) 2025-04-14 00:43:34.949842 | orchestrator | 2025-04-14 00:43:34.952869 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-04-14 00:43:34.953533 | orchestrator | Monday 14 April 2025 00:43:34 +0000 (0:00:00.202) 0:00:43.831 ********** 2025-04-14 00:43:35.118648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3f558b9-064d-5710-baa4-8e41f44a2baf'}})  2025-04-14 00:43:35.119768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}})  2025-04-14 00:43:35.119813 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:35.121771 | orchestrator | 2025-04-14 00:43:35.123540 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-04-14 00:43:35.124842 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.170) 0:00:44.001 ********** 2025-04-14 00:43:35.314508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3f558b9-064d-5710-baa4-8e41f44a2baf'}})  2025-04-14 00:43:35.315012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}})  2025-04-14 00:43:35.316575 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:35.318230 | orchestrator | 2025-04-14 00:43:35.321285 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-04-14 00:43:35.494737 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.197) 0:00:44.198 ********** 2025-04-14 00:43:35.494868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b3f558b9-064d-5710-baa4-8e41f44a2baf'}})  2025-04-14 00:43:35.494966 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}})  2025-04-14 00:43:35.497025 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:35.497708 | orchestrator | 2025-04-14 00:43:35.498876 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-04-14 00:43:35.499975 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.177) 0:00:44.376 ********** 2025-04-14 00:43:35.639825 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:43:35.640741 | orchestrator | 2025-04-14 00:43:35.641567 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-04-14 00:43:35.642410 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.146) 0:00:44.523 ********** 2025-04-14 00:43:35.793669 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:43:35.794362 | orchestrator | 2025-04-14 00:43:35.795019 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-04-14 00:43:35.797270 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.153) 0:00:44.676 ********** 2025-04-14 00:43:35.937649 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:35.938403 | orchestrator | 2025-04-14 00:43:35.938456 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-04-14 00:43:35.941085 | orchestrator | Monday 14 April 2025 00:43:35 +0000 (0:00:00.141) 0:00:44.817 ********** 2025-04-14 00:43:36.319600 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:36.321892 | orchestrator | 2025-04-14 00:43:36.321996 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-04-14 00:43:36.322086 | orchestrator | Monday 14 April 2025 00:43:36 +0000 (0:00:00.380) 0:00:45.197 ********** 2025-04-14 00:43:36.464555 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:36.465444 | orchestrator | 2025-04-14 00:43:36.465492 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-04-14 00:43:36.465914 | orchestrator | Monday 14 April 2025 00:43:36 +0000 (0:00:00.149) 0:00:45.347 ********** 2025-04-14 00:43:36.604420 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:43:36.604908 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:36.604953 | orchestrator |  "sdb": { 2025-04-14 00:43:36.606209 | orchestrator |  "osd_lvm_uuid": "b3f558b9-064d-5710-baa4-8e41f44a2baf" 2025-04-14 00:43:36.606949 | orchestrator |  }, 2025-04-14 00:43:36.607896 | orchestrator |  "sdc": { 2025-04-14 00:43:36.609947 | orchestrator |  "osd_lvm_uuid": "1e3b39ff-ab1d-556f-9f1e-d127c66e789a" 2025-04-14 00:43:36.610617 | orchestrator |  } 2025-04-14 00:43:36.610667 | orchestrator |  } 2025-04-14 00:43:36.610691 | orchestrator | } 2025-04-14 00:43:36.610959 | orchestrator | 2025-04-14 00:43:36.611632 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-04-14 00:43:36.612469 | orchestrator | Monday 14 April 2025 00:43:36 +0000 (0:00:00.141) 0:00:45.488 ********** 2025-04-14 00:43:36.743902 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:36.744814 | orchestrator | 2025-04-14 00:43:36.746228 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-04-14 00:43:36.747804 | orchestrator | Monday 14 April 2025 00:43:36 +0000 (0:00:00.138) 0:00:45.627 ********** 2025-04-14 
00:43:36.927294 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:36.927601 | orchestrator | 2025-04-14 00:43:36.928805 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-04-14 00:43:36.929893 | orchestrator | Monday 14 April 2025 00:43:36 +0000 (0:00:00.183) 0:00:45.810 ********** 2025-04-14 00:43:37.072793 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:43:37.073021 | orchestrator | 2025-04-14 00:43:37.073543 | orchestrator | TASK [Print configuration data] ************************************************ 2025-04-14 00:43:37.074472 | orchestrator | Monday 14 April 2025 00:43:37 +0000 (0:00:00.144) 0:00:45.954 ********** 2025-04-14 00:43:37.359898 | orchestrator | changed: [testbed-node-5] => { 2025-04-14 00:43:37.360213 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-04-14 00:43:37.361932 | orchestrator |  "ceph_osd_devices": { 2025-04-14 00:43:37.364977 | orchestrator |  "sdb": { 2025-04-14 00:43:37.365115 | orchestrator |  "osd_lvm_uuid": "b3f558b9-064d-5710-baa4-8e41f44a2baf" 2025-04-14 00:43:37.365831 | orchestrator |  }, 2025-04-14 00:43:37.366248 | orchestrator |  "sdc": { 2025-04-14 00:43:37.366932 | orchestrator |  "osd_lvm_uuid": "1e3b39ff-ab1d-556f-9f1e-d127c66e789a" 2025-04-14 00:43:37.367820 | orchestrator |  } 2025-04-14 00:43:37.368298 | orchestrator |  }, 2025-04-14 00:43:37.368329 | orchestrator |  "lvm_volumes": [ 2025-04-14 00:43:37.368972 | orchestrator |  { 2025-04-14 00:43:37.369886 | orchestrator |  "data": "osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf", 2025-04-14 00:43:37.370076 | orchestrator |  "data_vg": "ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf" 2025-04-14 00:43:37.370517 | orchestrator |  }, 2025-04-14 00:43:37.371204 | orchestrator |  { 2025-04-14 00:43:37.371848 | orchestrator |  "data": "osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a", 2025-04-14 00:43:37.372249 | orchestrator |  "data_vg": "ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a" 2025-04-14 00:43:37.372942 | orchestrator |  } 2025-04-14 00:43:37.373336 | orchestrator |  ] 2025-04-14 00:43:37.373849 | orchestrator |  } 2025-04-14 00:43:37.374370 | orchestrator | } 2025-04-14 00:43:37.374972 | orchestrator | 2025-04-14 00:43:37.375238 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-04-14 00:43:37.375653 | orchestrator | Monday 14 April 2025 00:43:37 +0000 (0:00:00.288) 0:00:46.243 ********** 2025-04-14 00:43:38.681445 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-14 00:43:38.682097 | orchestrator | 2025-04-14 00:43:38.682214 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:43:38.682312 | orchestrator | 2025-04-14 00:43:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:43:38.682433 | orchestrator | 2025-04-14 00:43:38 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:43:38.682459 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-14 00:43:38.683262 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-14 00:43:38.683437 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-04-14 00:43:38.683729 | orchestrator |
2025-04-14 00:43:38.684228 | orchestrator |
2025-04-14 00:43:38.685086 | orchestrator |
2025-04-14 00:43:38.685406 | orchestrator | TASKS RECAP ********************************************************************
2025-04-14 00:43:38.686120 | orchestrator | Monday 14 April 2025 00:43:38 +0000 (0:00:01.320) 0:00:47.564 **********
2025-04-14 00:43:38.687444 | orchestrator | ===============================================================================
2025-04-14 00:43:38.688381 | orchestrator | Write configuration file ------------------------------------------------ 5.03s
2025-04-14 00:43:38.688872 | orchestrator | Add known partitions to the list of available block devices ------------- 1.44s
2025-04-14 00:43:38.689583 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2025-04-14 00:43:38.690335 | orchestrator | Get initial list of available block devices ----------------------------- 1.08s
2025-04-14 00:43:38.690581 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-04-14 00:43:38.690826 | orchestrator | Add known partitions to the list of available block devices ------------- 1.08s
2025-04-14 00:43:38.691273 | orchestrator | Print configuration data ------------------------------------------------ 1.08s
2025-04-14 00:43:38.691863 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s
2025-04-14 00:43:38.692220 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2025-04-14 00:43:38.692656 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.78s
2025-04-14 00:43:38.693029 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-04-14 00:43:38.693422 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.77s
2025-04-14 00:43:38.693890 | orchestrator | Set WAL devices config data --------------------------------------------- 0.74s
2025-04-14 00:43:38.694213 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2025-04-14 00:43:38.694465 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-04-14 00:43:38.695212 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.68s
2025-04-14 00:43:38.695466 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-04-14 00:43:38.695696 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-04-14 00:43:38.695974 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-04-14 00:43:38.696428 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-04-14 00:43:50.861726 | orchestrator | 2025-04-14 00:43:50 | INFO  | Task 80182b68-4535-4961-a569-3f24ad0a9682 is running in background. Output coming soon.
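The data pushed to testbed-manager by the "Write configuration file" handler is the structure shown under "Print configuration data": one ceph_osd_devices entry per OSD disk plus a block-only lvm_volumes list (the block+db, block+wal and block+db+wal variants were skipped on all three nodes). A rough YAML rendering for testbed-node-5 is sketched below; the file name and its location in the manager's host vars are assumptions, while the keys and values are taken from the log above.

```yaml
# Hypothetical host vars file for testbed-node-5 (name and path assumed);
# keys and UUIDs are copied from the "Print configuration data" output above.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: b3f558b9-064d-5710-baa4-8e41f44a2baf
  sdc:
    osd_lvm_uuid: 1e3b39ff-ab1d-556f-9f1e-d127c66e789a
lvm_volumes:
  - data: osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf
    data_vg: ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf
  - data: osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a
    data_vg: ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a
```

testbed-node-3 and testbed-node-4 receive the same layout with their own UUIDs, as printed for sdb and sdc earlier in this play.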
2025-04-14 00:44:30.281418 | orchestrator | 2025-04-14 00:44:21 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-04-14 00:44:31.987340 | orchestrator | 2025-04-14 00:44:21 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-04-14 00:44:31.987460 | orchestrator | 2025-04-14 00:44:21 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-04-14 00:44:31.987480 | orchestrator | 2025-04-14 00:44:21 | INFO  | Handling group overwrites in 99-overwrite 2025-04-14 00:44:31.987511 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group ceph-mds from 50-ceph 2025-04-14 00:44:31.987541 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group ceph-rgw from 50-ceph 2025-04-14 00:44:31.987557 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group netbird:children from 50-infrastruture 2025-04-14 00:44:31.987572 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group storage:children from 50-kolla 2025-04-14 00:44:31.987587 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group frr:children from 60-generic 2025-04-14 00:44:31.987601 | orchestrator | 2025-04-14 00:44:21 | INFO  | Handling group overwrites in 20-roles 2025-04-14 00:44:31.987616 | orchestrator | 2025-04-14 00:44:21 | INFO  | Removing group k3s_node from 50-infrastruture 2025-04-14 00:44:31.987630 | orchestrator | 2025-04-14 00:44:22 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-04-14 00:44:31.987644 | orchestrator | 2025-04-14 00:44:30 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-04-14 00:44:31.987677 | orchestrator | 2025-04-14 00:44:31 | INFO  | Task f503f7ba-69d1-446b-a567-59d751e9e949 (ceph-create-lvm-devices) was prepared for execution. 2025-04-14 00:44:34.973523 | orchestrator | 2025-04-14 00:44:31 | INFO  | It takes a moment until task f503f7ba-69d1-446b-a567-59d751e9e949 (ceph-create-lvm-devices) has been started and output is visible here. 
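The ceph-create-lvm-devices task prepared here consumes the lvm_volumes entries written in the previous step: for each entry it creates the data_vg volume group on the matching OSD disk and an osd-block-<uuid> logical volume inside it, which is what the "Create block VGs" and "Create block LVs" tasks below report as changed. A minimal sketch of what these two steps amount to, assuming the community.general.lvg and lvol modules and plain /dev/sdX device paths (the actual role resolves the device from ceph_osd_devices and may use the /dev/disk/by-id links gathered earlier):

```yaml
# Minimal sketch, not the OSISM role itself. VG/LV names follow the log output
# for testbed-node-3; the /dev/sdX paths and the LV sizing are assumptions.
- hosts: testbed-node-3
  become: true
  tasks:
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.vg }}"
        pvs: "{{ item.device }}"
      loop:
        - { device: /dev/sdb, vg: ceph-010b5855-d3d9-5348-85e9-2943091c3a59 }
        - { device: /dev/sdc, vg: ceph-47a37963-cc76-524e-bf57-deb935e0a7e9 }

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"   # e.g. osd-block-010b5855-d3d9-5348-85e9-2943091c3a59
        size: 100%FREE          # assumption: the OSD block LV takes the whole VG
      loop: "{{ lvm_volumes }}"
```

The resulting VG/LV pairs are what ceph-volume later consumes via the lvm_volumes list when the OSDs are created.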
2025-04-14 00:44:34.973631 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 00:44:35.496021 | orchestrator | 2025-04-14 00:44:35.499739 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-14 00:44:35.501733 | orchestrator | 2025-04-14 00:44:35.502514 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:44:35.504010 | orchestrator | Monday 14 April 2025 00:44:35 +0000 (0:00:00.444) 0:00:00.444 ********** 2025-04-14 00:44:35.740083 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-04-14 00:44:35.741059 | orchestrator | 2025-04-14 00:44:35.741843 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:44:35.742930 | orchestrator | Monday 14 April 2025 00:44:35 +0000 (0:00:00.249) 0:00:00.694 ********** 2025-04-14 00:44:35.962384 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:35.962607 | orchestrator | 2025-04-14 00:44:35.962635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:35.962659 | orchestrator | Monday 14 April 2025 00:44:35 +0000 (0:00:00.222) 0:00:00.916 ********** 2025-04-14 00:44:36.689400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-04-14 00:44:36.689904 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-04-14 00:44:36.690585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-04-14 00:44:36.691469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-04-14 00:44:36.692523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-04-14 00:44:36.692968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-04-14 00:44:36.693249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-04-14 00:44:36.693712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-04-14 00:44:36.694449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-04-14 00:44:36.695257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-04-14 00:44:36.696009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-04-14 00:44:36.696850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-04-14 00:44:36.697492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-04-14 00:44:36.698253 | orchestrator | 2025-04-14 00:44:36.698964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:36.700184 | orchestrator | Monday 14 April 2025 00:44:36 +0000 (0:00:00.724) 0:00:01.641 ********** 2025-04-14 00:44:36.880376 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:36.880889 | orchestrator | 2025-04-14 00:44:36.882273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:36.882535 | orchestrator | Monday 14 April 2025 00:44:36 +0000 
(0:00:00.193) 0:00:01.834 ********** 2025-04-14 00:44:37.094310 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:37.095987 | orchestrator | 2025-04-14 00:44:37.096066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:37.096093 | orchestrator | Monday 14 April 2025 00:44:37 +0000 (0:00:00.211) 0:00:02.045 ********** 2025-04-14 00:44:37.297926 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:37.298879 | orchestrator | 2025-04-14 00:44:37.299827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:37.300780 | orchestrator | Monday 14 April 2025 00:44:37 +0000 (0:00:00.205) 0:00:02.251 ********** 2025-04-14 00:44:37.568354 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:37.568786 | orchestrator | 2025-04-14 00:44:37.570388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:37.571098 | orchestrator | Monday 14 April 2025 00:44:37 +0000 (0:00:00.268) 0:00:02.519 ********** 2025-04-14 00:44:37.776196 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:37.776980 | orchestrator | 2025-04-14 00:44:37.777894 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:37.778913 | orchestrator | Monday 14 April 2025 00:44:37 +0000 (0:00:00.210) 0:00:02.730 ********** 2025-04-14 00:44:37.985872 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:37.986220 | orchestrator | 2025-04-14 00:44:37.987445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:37.990304 | orchestrator | Monday 14 April 2025 00:44:37 +0000 (0:00:00.208) 0:00:02.939 ********** 2025-04-14 00:44:38.275685 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:38.275918 | orchestrator | 2025-04-14 00:44:38.276619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:38.279127 | orchestrator | Monday 14 April 2025 00:44:38 +0000 (0:00:00.288) 0:00:03.227 ********** 2025-04-14 00:44:38.490265 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:38.491352 | orchestrator | 2025-04-14 00:44:38.492147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:38.492613 | orchestrator | Monday 14 April 2025 00:44:38 +0000 (0:00:00.216) 0:00:03.443 ********** 2025-04-14 00:44:39.166331 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d) 2025-04-14 00:44:39.166512 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d) 2025-04-14 00:44:39.167165 | orchestrator | 2025-04-14 00:44:39.167574 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:39.167998 | orchestrator | Monday 14 April 2025 00:44:39 +0000 (0:00:00.675) 0:00:04.119 ********** 2025-04-14 00:44:39.992593 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e) 2025-04-14 00:44:39.996355 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e) 2025-04-14 00:44:39.996535 | orchestrator | 2025-04-14 00:44:39.997990 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 
00:44:39.998918 | orchestrator | Monday 14 April 2025 00:44:39 +0000 (0:00:00.824) 0:00:04.944 ********** 2025-04-14 00:44:40.438485 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e) 2025-04-14 00:44:40.438656 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e) 2025-04-14 00:44:40.438687 | orchestrator | 2025-04-14 00:44:40.439478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:40.440065 | orchestrator | Monday 14 April 2025 00:44:40 +0000 (0:00:00.445) 0:00:05.390 ********** 2025-04-14 00:44:40.918713 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2) 2025-04-14 00:44:40.919654 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2) 2025-04-14 00:44:40.920243 | orchestrator | 2025-04-14 00:44:40.923356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:44:41.260588 | orchestrator | Monday 14 April 2025 00:44:40 +0000 (0:00:00.479) 0:00:05.870 ********** 2025-04-14 00:44:41.260723 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:44:41.261368 | orchestrator | 2025-04-14 00:44:41.262245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:41.262273 | orchestrator | Monday 14 April 2025 00:44:41 +0000 (0:00:00.344) 0:00:06.214 ********** 2025-04-14 00:44:41.741148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-04-14 00:44:41.742432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-04-14 00:44:41.743508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-04-14 00:44:41.744813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-04-14 00:44:41.745764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-04-14 00:44:41.747404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-04-14 00:44:41.748478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-04-14 00:44:41.749055 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-04-14 00:44:41.749720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-04-14 00:44:41.750221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-04-14 00:44:41.750679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-04-14 00:44:41.751161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-04-14 00:44:41.752042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-04-14 00:44:41.752142 | orchestrator | 2025-04-14 00:44:41.752535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:41.753212 | orchestrator | Monday 14 April 2025 00:44:41 +0000 
(0:00:00.480) 0:00:06.694 ********** 2025-04-14 00:44:41.952998 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:41.953333 | orchestrator | 2025-04-14 00:44:41.953962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:41.954428 | orchestrator | Monday 14 April 2025 00:44:41 +0000 (0:00:00.209) 0:00:06.904 ********** 2025-04-14 00:44:42.165568 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:42.167285 | orchestrator | 2025-04-14 00:44:42.167850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:42.168635 | orchestrator | Monday 14 April 2025 00:44:42 +0000 (0:00:00.212) 0:00:07.117 ********** 2025-04-14 00:44:42.394875 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:42.395780 | orchestrator | 2025-04-14 00:44:42.395827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:42.396211 | orchestrator | Monday 14 April 2025 00:44:42 +0000 (0:00:00.230) 0:00:07.347 ********** 2025-04-14 00:44:42.607325 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:42.608068 | orchestrator | 2025-04-14 00:44:42.608113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:42.608767 | orchestrator | Monday 14 April 2025 00:44:42 +0000 (0:00:00.213) 0:00:07.561 ********** 2025-04-14 00:44:43.212385 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:43.212647 | orchestrator | 2025-04-14 00:44:43.213656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:43.214331 | orchestrator | Monday 14 April 2025 00:44:43 +0000 (0:00:00.603) 0:00:08.164 ********** 2025-04-14 00:44:43.415286 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:43.621348 | orchestrator | 2025-04-14 00:44:43.621462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:43.621481 | orchestrator | Monday 14 April 2025 00:44:43 +0000 (0:00:00.201) 0:00:08.366 ********** 2025-04-14 00:44:43.621514 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:43.622821 | orchestrator | 2025-04-14 00:44:43.623379 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:43.624840 | orchestrator | Monday 14 April 2025 00:44:43 +0000 (0:00:00.208) 0:00:08.574 ********** 2025-04-14 00:44:43.829329 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:43.829982 | orchestrator | 2025-04-14 00:44:43.830350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:43.831571 | orchestrator | Monday 14 April 2025 00:44:43 +0000 (0:00:00.208) 0:00:08.783 ********** 2025-04-14 00:44:44.507887 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-04-14 00:44:44.508976 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-04-14 00:44:44.512162 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-04-14 00:44:44.513159 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-04-14 00:44:44.514638 | orchestrator | 2025-04-14 00:44:44.515387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:44.516236 | orchestrator | Monday 14 April 2025 00:44:44 +0000 (0:00:00.674) 0:00:09.457 ********** 2025-04-14 00:44:44.701558 | orchestrator | 
skipping: [testbed-node-3] 2025-04-14 00:44:44.703214 | orchestrator | 2025-04-14 00:44:44.703739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:44.704981 | orchestrator | Monday 14 April 2025 00:44:44 +0000 (0:00:00.197) 0:00:09.655 ********** 2025-04-14 00:44:44.900590 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:44.900764 | orchestrator | 2025-04-14 00:44:44.900874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:44.901784 | orchestrator | Monday 14 April 2025 00:44:44 +0000 (0:00:00.198) 0:00:09.853 ********** 2025-04-14 00:44:45.099455 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:45.100734 | orchestrator | 2025-04-14 00:44:45.101374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:44:45.101721 | orchestrator | Monday 14 April 2025 00:44:45 +0000 (0:00:00.199) 0:00:10.053 ********** 2025-04-14 00:44:45.329616 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:45.332135 | orchestrator | 2025-04-14 00:44:45.332385 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-14 00:44:45.333566 | orchestrator | Monday 14 April 2025 00:44:45 +0000 (0:00:00.228) 0:00:10.281 ********** 2025-04-14 00:44:45.476103 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:45.476268 | orchestrator | 2025-04-14 00:44:45.476294 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-14 00:44:45.476534 | orchestrator | Monday 14 April 2025 00:44:45 +0000 (0:00:00.146) 0:00:10.427 ********** 2025-04-14 00:44:45.687243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '010b5855-d3d9-5348-85e9-2943091c3a59'}}) 2025-04-14 00:44:45.688051 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '47a37963-cc76-524e-bf57-deb935e0a7e9'}}) 2025-04-14 00:44:45.689165 | orchestrator | 2025-04-14 00:44:45.689893 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-14 00:44:45.692101 | orchestrator | Monday 14 April 2025 00:44:45 +0000 (0:00:00.212) 0:00:10.640 ********** 2025-04-14 00:44:47.890748 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'}) 2025-04-14 00:44:47.891274 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'}) 2025-04-14 00:44:47.891331 | orchestrator | 2025-04-14 00:44:47.891702 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-14 00:44:47.891985 | orchestrator | Monday 14 April 2025 00:44:47 +0000 (0:00:02.202) 0:00:12.843 ********** 2025-04-14 00:44:48.063683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:48.063834 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:48.064407 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:48.065251 | orchestrator | 2025-04-14 00:44:48.065557 | 
orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-14 00:44:48.066080 | orchestrator | Monday 14 April 2025 00:44:48 +0000 (0:00:00.174) 0:00:13.017 ********** 2025-04-14 00:44:49.503447 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'}) 2025-04-14 00:44:49.505847 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'}) 2025-04-14 00:44:49.506706 | orchestrator | 2025-04-14 00:44:49.506795 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-14 00:44:49.506827 | orchestrator | Monday 14 April 2025 00:44:49 +0000 (0:00:01.436) 0:00:14.454 ********** 2025-04-14 00:44:49.685950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:49.686295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:49.687131 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:49.687892 | orchestrator | 2025-04-14 00:44:49.690591 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-14 00:44:49.690837 | orchestrator | Monday 14 April 2025 00:44:49 +0000 (0:00:00.184) 0:00:14.638 ********** 2025-04-14 00:44:49.838329 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:49.838878 | orchestrator | 2025-04-14 00:44:49.839899 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-14 00:44:49.842655 | orchestrator | Monday 14 April 2025 00:44:49 +0000 (0:00:00.152) 0:00:14.791 ********** 2025-04-14 00:44:50.025416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:50.026626 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:50.030137 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:50.030577 | orchestrator | 2025-04-14 00:44:50.031552 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-14 00:44:50.032255 | orchestrator | Monday 14 April 2025 00:44:50 +0000 (0:00:00.185) 0:00:14.976 ********** 2025-04-14 00:44:50.180649 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:50.181962 | orchestrator | 2025-04-14 00:44:50.182433 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-14 00:44:50.182501 | orchestrator | Monday 14 April 2025 00:44:50 +0000 (0:00:00.157) 0:00:15.134 ********** 2025-04-14 00:44:50.360685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:50.361528 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:50.362130 | orchestrator | skipping: 
[testbed-node-3] 2025-04-14 00:44:50.362369 | orchestrator | 2025-04-14 00:44:50.363632 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-14 00:44:50.364018 | orchestrator | Monday 14 April 2025 00:44:50 +0000 (0:00:00.179) 0:00:15.314 ********** 2025-04-14 00:44:50.674896 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:50.675269 | orchestrator | 2025-04-14 00:44:50.676038 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-14 00:44:50.676433 | orchestrator | Monday 14 April 2025 00:44:50 +0000 (0:00:00.314) 0:00:15.629 ********** 2025-04-14 00:44:50.872246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:50.872783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:50.873226 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:50.875743 | orchestrator | 2025-04-14 00:44:51.024212 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-14 00:44:51.024337 | orchestrator | Monday 14 April 2025 00:44:50 +0000 (0:00:00.195) 0:00:15.825 ********** 2025-04-14 00:44:51.024372 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:51.024455 | orchestrator | 2025-04-14 00:44:51.024915 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-14 00:44:51.025658 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.152) 0:00:15.977 ********** 2025-04-14 00:44:51.192725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:51.194387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:51.197397 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.361664 | orchestrator | 2025-04-14 00:44:51.361783 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-14 00:44:51.361805 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.169) 0:00:16.146 ********** 2025-04-14 00:44:51.361840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:51.362678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:51.363390 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.364142 | orchestrator | 2025-04-14 00:44:51.366786 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-14 00:44:51.538508 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.169) 0:00:16.315 ********** 2025-04-14 00:44:51.538649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:51.539384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:51.540360 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.540898 | orchestrator | 2025-04-14 00:44:51.543163 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-14 00:44:51.543917 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.175) 0:00:16.491 ********** 2025-04-14 00:44:51.683479 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.683828 | orchestrator | 2025-04-14 00:44:51.685696 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-14 00:44:51.686387 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.145) 0:00:16.636 ********** 2025-04-14 00:44:51.840956 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.842097 | orchestrator | 2025-04-14 00:44:51.843409 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-14 00:44:51.844773 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.156) 0:00:16.793 ********** 2025-04-14 00:44:51.979553 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:51.982231 | orchestrator | 2025-04-14 00:44:51.984014 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-14 00:44:51.986179 | orchestrator | Monday 14 April 2025 00:44:51 +0000 (0:00:00.138) 0:00:16.931 ********** 2025-04-14 00:44:52.140948 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:44:52.142242 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-14 00:44:52.143236 | orchestrator | } 2025-04-14 00:44:52.144126 | orchestrator | 2025-04-14 00:44:52.144504 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-14 00:44:52.145266 | orchestrator | Monday 14 April 2025 00:44:52 +0000 (0:00:00.159) 0:00:17.090 ********** 2025-04-14 00:44:52.295707 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:44:52.296178 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-14 00:44:52.296584 | orchestrator | } 2025-04-14 00:44:52.297246 | orchestrator | 2025-04-14 00:44:52.298071 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-14 00:44:52.298206 | orchestrator | Monday 14 April 2025 00:44:52 +0000 (0:00:00.159) 0:00:17.250 ********** 2025-04-14 00:44:52.451185 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:44:52.451427 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-14 00:44:52.451463 | orchestrator | } 2025-04-14 00:44:52.452904 | orchestrator | 2025-04-14 00:44:52.453053 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-14 00:44:52.453347 | orchestrator | Monday 14 April 2025 00:44:52 +0000 (0:00:00.153) 0:00:17.403 ********** 2025-04-14 00:44:53.472982 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:53.473383 | orchestrator | 2025-04-14 00:44:53.473427 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-14 00:44:53.474486 | orchestrator | Monday 14 April 2025 00:44:53 +0000 (0:00:01.023) 0:00:18.426 ********** 2025-04-14 00:44:53.976071 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:53.978964 | orchestrator | 2025-04-14 00:44:54.524854 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] 
**************** 2025-04-14 00:44:54.524976 | orchestrator | Monday 14 April 2025 00:44:53 +0000 (0:00:00.502) 0:00:18.929 ********** 2025-04-14 00:44:54.525065 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:54.526416 | orchestrator | 2025-04-14 00:44:54.526451 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-14 00:44:54.526472 | orchestrator | Monday 14 April 2025 00:44:54 +0000 (0:00:00.542) 0:00:19.471 ********** 2025-04-14 00:44:54.704157 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:54.704309 | orchestrator | 2025-04-14 00:44:54.704723 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-14 00:44:54.704753 | orchestrator | Monday 14 April 2025 00:44:54 +0000 (0:00:00.186) 0:00:19.658 ********** 2025-04-14 00:44:54.832612 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:54.833287 | orchestrator | 2025-04-14 00:44:54.833898 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-14 00:44:54.834261 | orchestrator | Monday 14 April 2025 00:44:54 +0000 (0:00:00.128) 0:00:19.786 ********** 2025-04-14 00:44:54.963741 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:54.964244 | orchestrator | 2025-04-14 00:44:54.965664 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-14 00:44:54.968890 | orchestrator | Monday 14 April 2025 00:44:54 +0000 (0:00:00.131) 0:00:19.917 ********** 2025-04-14 00:44:55.125176 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:44:55.126687 | orchestrator |  "vgs_report": { 2025-04-14 00:44:55.127665 | orchestrator |  "vg": [] 2025-04-14 00:44:55.128824 | orchestrator |  } 2025-04-14 00:44:55.132916 | orchestrator | } 2025-04-14 00:44:55.133163 | orchestrator | 2025-04-14 00:44:55.133956 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-14 00:44:55.134238 | orchestrator | Monday 14 April 2025 00:44:55 +0000 (0:00:00.159) 0:00:20.076 ********** 2025-04-14 00:44:55.273327 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:55.276137 | orchestrator | 2025-04-14 00:44:55.423723 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-14 00:44:55.423842 | orchestrator | Monday 14 April 2025 00:44:55 +0000 (0:00:00.149) 0:00:20.226 ********** 2025-04-14 00:44:55.423877 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:55.424306 | orchestrator | 2025-04-14 00:44:55.428246 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-14 00:44:55.565609 | orchestrator | Monday 14 April 2025 00:44:55 +0000 (0:00:00.149) 0:00:20.375 ********** 2025-04-14 00:44:55.565769 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:55.566922 | orchestrator | 2025-04-14 00:44:55.571607 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-14 00:44:55.571796 | orchestrator | Monday 14 April 2025 00:44:55 +0000 (0:00:00.143) 0:00:20.519 ********** 2025-04-14 00:44:55.714894 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:55.715520 | orchestrator | 2025-04-14 00:44:55.719757 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-14 00:44:55.720309 | orchestrator | Monday 14 April 2025 00:44:55 +0000 (0:00:00.150) 0:00:20.669 ********** 2025-04-14 
00:44:56.065654 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.067571 | orchestrator | 2025-04-14 00:44:56.072366 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-14 00:44:56.074319 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.344) 0:00:21.013 ********** 2025-04-14 00:44:56.220380 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.220613 | orchestrator | 2025-04-14 00:44:56.221440 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-14 00:44:56.222409 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.160) 0:00:21.173 ********** 2025-04-14 00:44:56.387066 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.387662 | orchestrator | 2025-04-14 00:44:56.388028 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-14 00:44:56.388617 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.167) 0:00:21.341 ********** 2025-04-14 00:44:56.530415 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.531049 | orchestrator | 2025-04-14 00:44:56.531730 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-14 00:44:56.532394 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.143) 0:00:21.484 ********** 2025-04-14 00:44:56.675457 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.676096 | orchestrator | 2025-04-14 00:44:56.677000 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-14 00:44:56.680897 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.145) 0:00:21.629 ********** 2025-04-14 00:44:56.818466 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.819164 | orchestrator | 2025-04-14 00:44:56.819592 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-14 00:44:56.823029 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.142) 0:00:21.772 ********** 2025-04-14 00:44:56.989647 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:56.989842 | orchestrator | 2025-04-14 00:44:56.990771 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-14 00:44:56.991439 | orchestrator | Monday 14 April 2025 00:44:56 +0000 (0:00:00.171) 0:00:21.943 ********** 2025-04-14 00:44:57.144844 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:57.145503 | orchestrator | 2025-04-14 00:44:57.146358 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-14 00:44:57.148233 | orchestrator | Monday 14 April 2025 00:44:57 +0000 (0:00:00.153) 0:00:22.096 ********** 2025-04-14 00:44:57.279352 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:57.280222 | orchestrator | 2025-04-14 00:44:57.281396 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-14 00:44:57.282524 | orchestrator | Monday 14 April 2025 00:44:57 +0000 (0:00:00.136) 0:00:22.233 ********** 2025-04-14 00:44:57.426271 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:57.427862 | orchestrator | 2025-04-14 00:44:57.431150 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-14 00:44:57.595676 | orchestrator | Monday 14 April 2025 00:44:57 +0000 (0:00:00.145) 0:00:22.379 
********** 2025-04-14 00:44:57.595807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:57.597517 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:57.597676 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:57.598312 | orchestrator | 2025-04-14 00:44:57.601013 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-14 00:44:57.999479 | orchestrator | Monday 14 April 2025 00:44:57 +0000 (0:00:00.170) 0:00:22.549 ********** 2025-04-14 00:44:57.999646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.210501 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.210618 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.210638 | orchestrator | 2025-04-14 00:44:58.210655 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-14 00:44:58.210670 | orchestrator | Monday 14 April 2025 00:44:57 +0000 (0:00:00.398) 0:00:22.947 ********** 2025-04-14 00:44:58.210700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.211099 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.212150 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.212937 | orchestrator | 2025-04-14 00:44:58.213749 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-14 00:44:58.214163 | orchestrator | Monday 14 April 2025 00:44:58 +0000 (0:00:00.214) 0:00:23.162 ********** 2025-04-14 00:44:58.380453 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.384176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.386689 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.386723 | orchestrator | 2025-04-14 00:44:58.387864 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-14 00:44:58.388961 | orchestrator | Monday 14 April 2025 00:44:58 +0000 (0:00:00.170) 0:00:23.333 ********** 2025-04-14 00:44:58.552221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.552919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.554607 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.555074 | orchestrator | 2025-04-14 00:44:58.556941 | 
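All of the DB and WAL LV tasks in this block are skipped because no ceph_db_devices or ceph_wal_devices are configured for this testbed, so BlueStore DB and WAL stay on the data device. When such devices are configured, the playbook also enforces a minimum DB LV size of 30 GiB (see the 'Fail if DB LV size < 30 GiB' checks above). A rough sketch of what a DB LV creation task of this kind could look like, assuming community.general.lvol and lvm_volumes entries carrying db/db_vg keys; the module choice, the keys beyond data/data_vg, and the size default are illustrative assumptions only:

---
# Sketch only: one DB LV per OSD on a shared DB volume group.
# item.db / item.db_vg and ceph_db_lv_size are assumptions; in this run the
# real tasks are skipped because no DB/WAL devices are configured.
- name: Create DB LVs for ceph_db_devices (sketch)
  community.general.lvol:
    vg: "{{ item.db_vg }}"
    lv: "{{ item.db }}"
    size: "{{ ceph_db_lv_size | default('30g') }}"
  loop: "{{ lvm_volumes }}"
  when: item.db is defined and item.db_vg is defined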
orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-14 00:44:58.558248 | orchestrator | Monday 14 April 2025 00:44:58 +0000 (0:00:00.171) 0:00:23.505 ********** 2025-04-14 00:44:58.741630 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.741833 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.743130 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.744282 | orchestrator | 2025-04-14 00:44:58.745669 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-14 00:44:58.746600 | orchestrator | Monday 14 April 2025 00:44:58 +0000 (0:00:00.189) 0:00:23.694 ********** 2025-04-14 00:44:58.908619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:58.908859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:58.909317 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:58.910103 | orchestrator | 2025-04-14 00:44:58.910411 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-14 00:44:58.911843 | orchestrator | Monday 14 April 2025 00:44:58 +0000 (0:00:00.168) 0:00:23.863 ********** 2025-04-14 00:44:59.082777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:44:59.083410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:44:59.084225 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:44:59.085751 | orchestrator | 2025-04-14 00:44:59.087533 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-14 00:44:59.087707 | orchestrator | Monday 14 April 2025 00:44:59 +0000 (0:00:00.173) 0:00:24.036 ********** 2025-04-14 00:44:59.631440 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:44:59.632674 | orchestrator | 2025-04-14 00:44:59.633858 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-14 00:44:59.635553 | orchestrator | Monday 14 April 2025 00:44:59 +0000 (0:00:00.547) 0:00:24.584 ********** 2025-04-14 00:45:00.164420 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:45:00.165137 | orchestrator | 2025-04-14 00:45:00.170145 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-14 00:45:00.170309 | orchestrator | Monday 14 April 2025 00:45:00 +0000 (0:00:00.530) 0:00:25.114 ********** 2025-04-14 00:45:00.345330 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:45:00.346398 | orchestrator | 2025-04-14 00:45:00.346859 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-14 00:45:00.351184 | orchestrator | Monday 14 April 2025 00:45:00 +0000 (0:00:00.184) 0:00:25.299 ********** 2025-04-14 00:45:00.557355 | 
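The lvm_report printed a few lines below is assembled from the JSON output of lvs and pvs, as the register names _lvs_cmd_output/_pvs_cmd_output in the task titles suggest. A small sketch of that gather-and-combine step, assuming plain lvs/pvs invocations and a set_fact merge; the exact command options and filters are assumptions, only the register names and the lvm_report shape come from the log:

---
# Sketch only: collect LV->VG and PV->VG mappings as JSON and merge them
# into one report structure similar to the lvm_report printed below.
- name: Get list of Ceph LVs with associated VGs (sketch)
  ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs (sketch)
  ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output (sketch)
  ansible.builtin.set_fact:
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

The subsequent 'Fail if ... defined in lvm_volumes is missing' checks compare this report against lvm_volumes, so a missing block, DB, or WAL LV aborts the play early.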
orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'vg_name': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'}) 2025-04-14 00:45:00.559065 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'vg_name': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'}) 2025-04-14 00:45:00.559180 | orchestrator | 2025-04-14 00:45:00.560153 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-14 00:45:00.561070 | orchestrator | Monday 14 April 2025 00:45:00 +0000 (0:00:00.212) 0:00:25.511 ********** 2025-04-14 00:45:00.951441 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:45:00.951709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:45:00.953273 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:45:00.954173 | orchestrator | 2025-04-14 00:45:00.957494 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-14 00:45:00.958876 | orchestrator | Monday 14 April 2025 00:45:00 +0000 (0:00:00.394) 0:00:25.905 ********** 2025-04-14 00:45:01.134840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:45:01.135814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:45:01.138754 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:45:01.140513 | orchestrator | 2025-04-14 00:45:01.141600 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-14 00:45:01.142807 | orchestrator | Monday 14 April 2025 00:45:01 +0000 (0:00:00.183) 0:00:26.088 ********** 2025-04-14 00:45:01.341521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'})  2025-04-14 00:45:01.342399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'})  2025-04-14 00:45:01.342439 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:45:01.342464 | orchestrator | 2025-04-14 00:45:01.345588 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-14 00:45:01.346651 | orchestrator | Monday 14 April 2025 00:45:01 +0000 (0:00:00.204) 0:00:26.293 ********** 2025-04-14 00:45:02.079296 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 00:45:02.080492 | orchestrator |  "lvm_report": { 2025-04-14 00:45:02.086466 | orchestrator |  "lv": [ 2025-04-14 00:45:02.089675 | orchestrator |  { 2025-04-14 00:45:02.092601 | orchestrator |  "lv_name": "osd-block-010b5855-d3d9-5348-85e9-2943091c3a59", 2025-04-14 00:45:02.092838 | orchestrator |  "vg_name": "ceph-010b5855-d3d9-5348-85e9-2943091c3a59" 2025-04-14 00:45:02.094443 | orchestrator |  }, 2025-04-14 00:45:02.096956 | orchestrator |  { 2025-04-14 00:45:02.097590 | orchestrator |  "lv_name": "osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9", 2025-04-14 
00:45:02.100778 | orchestrator |  "vg_name": "ceph-47a37963-cc76-524e-bf57-deb935e0a7e9" 2025-04-14 00:45:02.101074 | orchestrator |  } 2025-04-14 00:45:02.101793 | orchestrator |  ], 2025-04-14 00:45:02.101831 | orchestrator |  "pv": [ 2025-04-14 00:45:02.102236 | orchestrator |  { 2025-04-14 00:45:02.102514 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-14 00:45:02.103033 | orchestrator |  "vg_name": "ceph-010b5855-d3d9-5348-85e9-2943091c3a59" 2025-04-14 00:45:02.103508 | orchestrator |  }, 2025-04-14 00:45:02.103703 | orchestrator |  { 2025-04-14 00:45:02.104186 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-14 00:45:02.104855 | orchestrator |  "vg_name": "ceph-47a37963-cc76-524e-bf57-deb935e0a7e9" 2025-04-14 00:45:02.104938 | orchestrator |  } 2025-04-14 00:45:02.105224 | orchestrator |  ] 2025-04-14 00:45:02.105525 | orchestrator |  } 2025-04-14 00:45:02.105882 | orchestrator | } 2025-04-14 00:45:02.106183 | orchestrator | 2025-04-14 00:45:02.106538 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-14 00:45:02.107004 | orchestrator | 2025-04-14 00:45:02.107193 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:45:02.107485 | orchestrator | Monday 14 April 2025 00:45:02 +0000 (0:00:00.733) 0:00:27.026 ********** 2025-04-14 00:45:02.688351 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-04-14 00:45:02.688529 | orchestrator | 2025-04-14 00:45:02.689132 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:45:02.689687 | orchestrator | Monday 14 April 2025 00:45:02 +0000 (0:00:00.614) 0:00:27.641 ********** 2025-04-14 00:45:02.921130 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:02.922532 | orchestrator | 2025-04-14 00:45:02.923573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:02.927086 | orchestrator | Monday 14 April 2025 00:45:02 +0000 (0:00:00.233) 0:00:27.875 ********** 2025-04-14 00:45:03.474124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-04-14 00:45:03.476257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-04-14 00:45:03.477068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-04-14 00:45:03.477635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-04-14 00:45:03.481052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-04-14 00:45:03.482340 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-04-14 00:45:03.482372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-04-14 00:45:03.482389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-04-14 00:45:03.483755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-04-14 00:45:03.484513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-04-14 00:45:03.485282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-04-14 00:45:03.486230 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-04-14 00:45:03.487100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-04-14 00:45:03.488064 | orchestrator | 2025-04-14 00:45:03.489306 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:03.490055 | orchestrator | Monday 14 April 2025 00:45:03 +0000 (0:00:00.550) 0:00:28.425 ********** 2025-04-14 00:45:03.684761 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:03.684929 | orchestrator | 2025-04-14 00:45:03.685940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:03.687781 | orchestrator | Monday 14 April 2025 00:45:03 +0000 (0:00:00.211) 0:00:28.636 ********** 2025-04-14 00:45:03.896543 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:03.896842 | orchestrator | 2025-04-14 00:45:03.896866 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:03.901164 | orchestrator | Monday 14 April 2025 00:45:03 +0000 (0:00:00.211) 0:00:28.848 ********** 2025-04-14 00:45:04.085191 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:04.087203 | orchestrator | 2025-04-14 00:45:04.089626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:04.096299 | orchestrator | Monday 14 April 2025 00:45:04 +0000 (0:00:00.188) 0:00:29.036 ********** 2025-04-14 00:45:04.334858 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:04.335119 | orchestrator | 2025-04-14 00:45:04.335607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:04.338132 | orchestrator | Monday 14 April 2025 00:45:04 +0000 (0:00:00.250) 0:00:29.287 ********** 2025-04-14 00:45:04.535653 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:04.536273 | orchestrator | 2025-04-14 00:45:04.537037 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:04.537792 | orchestrator | Monday 14 April 2025 00:45:04 +0000 (0:00:00.202) 0:00:29.489 ********** 2025-04-14 00:45:04.759962 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:04.760919 | orchestrator | 2025-04-14 00:45:04.763451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:05.142185 | orchestrator | Monday 14 April 2025 00:45:04 +0000 (0:00:00.222) 0:00:29.712 ********** 2025-04-14 00:45:05.142318 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:05.142395 | orchestrator | 2025-04-14 00:45:05.142895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:05.365261 | orchestrator | Monday 14 April 2025 00:45:05 +0000 (0:00:00.384) 0:00:30.096 ********** 2025-04-14 00:45:05.365420 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:05.365503 | orchestrator | 2025-04-14 00:45:05.366548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:05.367213 | orchestrator | Monday 14 April 2025 00:45:05 +0000 (0:00:00.218) 0:00:30.315 ********** 2025-04-14 00:45:05.811837 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12) 2025-04-14 00:45:05.813262 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12) 2025-04-14 00:45:05.813508 | orchestrator | 2025-04-14 00:45:05.814637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:05.815387 | orchestrator | Monday 14 April 2025 00:45:05 +0000 (0:00:00.450) 0:00:30.766 ********** 2025-04-14 00:45:06.250404 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9) 2025-04-14 00:45:06.251543 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9) 2025-04-14 00:45:06.254351 | orchestrator | 2025-04-14 00:45:06.254865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:06.257064 | orchestrator | Monday 14 April 2025 00:45:06 +0000 (0:00:00.437) 0:00:31.204 ********** 2025-04-14 00:45:06.728435 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d) 2025-04-14 00:45:06.729847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d) 2025-04-14 00:45:07.180896 | orchestrator | 2025-04-14 00:45:07.181137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:07.181173 | orchestrator | Monday 14 April 2025 00:45:06 +0000 (0:00:00.478) 0:00:31.682 ********** 2025-04-14 00:45:07.181217 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57) 2025-04-14 00:45:07.181538 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57) 2025-04-14 00:45:07.182822 | orchestrator | 2025-04-14 00:45:07.183656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:07.183909 | orchestrator | Monday 14 April 2025 00:45:07 +0000 (0:00:00.453) 0:00:32.135 ********** 2025-04-14 00:45:07.518302 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:45:07.518751 | orchestrator | 2025-04-14 00:45:07.519393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:07.519868 | orchestrator | Monday 14 April 2025 00:45:07 +0000 (0:00:00.334) 0:00:32.470 ********** 2025-04-14 00:45:08.051360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-04-14 00:45:08.052877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-04-14 00:45:08.053676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-04-14 00:45:08.054803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-04-14 00:45:08.056548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-04-14 00:45:08.057138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-04-14 00:45:08.058592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-04-14 00:45:08.059050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-04-14 00:45:08.060071 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-04-14 00:45:08.060649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-04-14 00:45:08.061116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-04-14 00:45:08.061502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-04-14 00:45:08.062135 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-04-14 00:45:08.064143 | orchestrator | 2025-04-14 00:45:08.064247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:08.064898 | orchestrator | Monday 14 April 2025 00:45:08 +0000 (0:00:00.534) 0:00:33.004 ********** 2025-04-14 00:45:08.258722 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:08.259657 | orchestrator | 2025-04-14 00:45:08.260300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:08.262838 | orchestrator | Monday 14 April 2025 00:45:08 +0000 (0:00:00.207) 0:00:33.212 ********** 2025-04-14 00:45:08.463670 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:08.464105 | orchestrator | 2025-04-14 00:45:08.464901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:08.465577 | orchestrator | Monday 14 April 2025 00:45:08 +0000 (0:00:00.203) 0:00:33.415 ********** 2025-04-14 00:45:09.012579 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:09.012955 | orchestrator | 2025-04-14 00:45:09.014289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:09.015133 | orchestrator | Monday 14 April 2025 00:45:09 +0000 (0:00:00.551) 0:00:33.966 ********** 2025-04-14 00:45:09.297721 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:09.298355 | orchestrator | 2025-04-14 00:45:09.301413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:09.302738 | orchestrator | Monday 14 April 2025 00:45:09 +0000 (0:00:00.283) 0:00:34.249 ********** 2025-04-14 00:45:09.498610 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:09.499021 | orchestrator | 2025-04-14 00:45:09.500877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:09.501779 | orchestrator | Monday 14 April 2025 00:45:09 +0000 (0:00:00.202) 0:00:34.452 ********** 2025-04-14 00:45:09.714840 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:09.715376 | orchestrator | 2025-04-14 00:45:09.715397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:09.715409 | orchestrator | Monday 14 April 2025 00:45:09 +0000 (0:00:00.216) 0:00:34.668 ********** 2025-04-14 00:45:09.921166 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:09.922218 | orchestrator | 2025-04-14 00:45:09.923057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:09.926696 | orchestrator | Monday 14 April 2025 00:45:09 +0000 (0:00:00.204) 0:00:34.873 ********** 2025-04-14 00:45:10.134779 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:10.135304 | orchestrator | 2025-04-14 00:45:10.136711 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-04-14 00:45:10.137430 | orchestrator | Monday 14 April 2025 00:45:10 +0000 (0:00:00.214) 0:00:35.088 ********** 2025-04-14 00:45:10.803350 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-04-14 00:45:10.805252 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-04-14 00:45:10.807185 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-04-14 00:45:10.807461 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-04-14 00:45:10.807945 | orchestrator | 2025-04-14 00:45:10.808535 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:10.809187 | orchestrator | Monday 14 April 2025 00:45:10 +0000 (0:00:00.667) 0:00:35.755 ********** 2025-04-14 00:45:11.025831 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:11.026842 | orchestrator | 2025-04-14 00:45:11.027755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:11.028859 | orchestrator | Monday 14 April 2025 00:45:11 +0000 (0:00:00.224) 0:00:35.979 ********** 2025-04-14 00:45:11.213687 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:11.214186 | orchestrator | 2025-04-14 00:45:11.215693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:11.218159 | orchestrator | Monday 14 April 2025 00:45:11 +0000 (0:00:00.187) 0:00:36.166 ********** 2025-04-14 00:45:11.418205 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:11.418367 | orchestrator | 2025-04-14 00:45:11.419810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:11.421048 | orchestrator | Monday 14 April 2025 00:45:11 +0000 (0:00:00.203) 0:00:36.370 ********** 2025-04-14 00:45:12.063466 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:12.065073 | orchestrator | 2025-04-14 00:45:12.065124 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-14 00:45:12.066534 | orchestrator | Monday 14 April 2025 00:45:12 +0000 (0:00:00.644) 0:00:37.015 ********** 2025-04-14 00:45:12.219078 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:12.219252 | orchestrator | 2025-04-14 00:45:12.220257 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-14 00:45:12.221723 | orchestrator | Monday 14 April 2025 00:45:12 +0000 (0:00:00.156) 0:00:37.171 ********** 2025-04-14 00:45:12.425410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '89320cc7-f853-5314-9a76-744a2d019bd6'}}) 2025-04-14 00:45:12.425748 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}}) 2025-04-14 00:45:12.426857 | orchestrator | 2025-04-14 00:45:12.429046 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-14 00:45:14.219877 | orchestrator | Monday 14 April 2025 00:45:12 +0000 (0:00:00.206) 0:00:37.377 ********** 2025-04-14 00:45:14.220042 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'}) 2025-04-14 00:45:14.221547 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 
'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}) 2025-04-14 00:45:14.221583 | orchestrator | 2025-04-14 00:45:14.224020 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-14 00:45:14.404367 | orchestrator | Monday 14 April 2025 00:45:14 +0000 (0:00:01.794) 0:00:39.172 ********** 2025-04-14 00:45:14.404528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:14.405240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:14.406178 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:14.408589 | orchestrator | 2025-04-14 00:45:14.411154 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-14 00:45:15.716701 | orchestrator | Monday 14 April 2025 00:45:14 +0000 (0:00:00.185) 0:00:39.358 ********** 2025-04-14 00:45:15.716829 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'}) 2025-04-14 00:45:15.721474 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}) 2025-04-14 00:45:15.721811 | orchestrator | 2025-04-14 00:45:15.722623 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-14 00:45:15.723092 | orchestrator | Monday 14 April 2025 00:45:15 +0000 (0:00:01.308) 0:00:40.666 ********** 2025-04-14 00:45:15.894329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:15.894609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:15.895930 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:15.896368 | orchestrator | 2025-04-14 00:45:15.896778 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-14 00:45:15.897108 | orchestrator | Monday 14 April 2025 00:45:15 +0000 (0:00:00.181) 0:00:40.847 ********** 2025-04-14 00:45:16.043670 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:16.043885 | orchestrator | 2025-04-14 00:45:16.045168 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-04-14 00:45:16.046287 | orchestrator | Monday 14 April 2025 00:45:16 +0000 (0:00:00.149) 0:00:40.996 ********** 2025-04-14 00:45:16.237182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:16.237769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:16.237815 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:16.239041 | orchestrator | 2025-04-14 00:45:16.239809 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-14 00:45:16.240427 | orchestrator | Monday 14 
April 2025 00:45:16 +0000 (0:00:00.191) 0:00:41.188 ********** 2025-04-14 00:45:16.577620 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:16.578516 | orchestrator | 2025-04-14 00:45:16.578580 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-14 00:45:16.580814 | orchestrator | Monday 14 April 2025 00:45:16 +0000 (0:00:00.341) 0:00:41.529 ********** 2025-04-14 00:45:16.784446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:16.784687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:16.786780 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:16.787891 | orchestrator | 2025-04-14 00:45:16.790981 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-14 00:45:16.792142 | orchestrator | Monday 14 April 2025 00:45:16 +0000 (0:00:00.207) 0:00:41.737 ********** 2025-04-14 00:45:16.942423 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:16.943706 | orchestrator | 2025-04-14 00:45:16.943757 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-14 00:45:16.944352 | orchestrator | Monday 14 April 2025 00:45:16 +0000 (0:00:00.158) 0:00:41.896 ********** 2025-04-14 00:45:17.117015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:17.118693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:17.257403 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:17.257533 | orchestrator | 2025-04-14 00:45:17.257553 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-14 00:45:17.257570 | orchestrator | Monday 14 April 2025 00:45:17 +0000 (0:00:00.175) 0:00:42.071 ********** 2025-04-14 00:45:17.257601 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:17.257928 | orchestrator | 2025-04-14 00:45:17.260847 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-14 00:45:17.482878 | orchestrator | Monday 14 April 2025 00:45:17 +0000 (0:00:00.139) 0:00:42.210 ********** 2025-04-14 00:45:17.483074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:17.484013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:17.484065 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:17.485292 | orchestrator | 2025-04-14 00:45:17.485901 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-14 00:45:17.486888 | orchestrator | Monday 14 April 2025 00:45:17 +0000 (0:00:00.223) 0:00:42.434 ********** 2025-04-14 00:45:17.657217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 
'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:17.658335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:17.659413 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:17.660747 | orchestrator | 2025-04-14 00:45:17.661973 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-14 00:45:17.663843 | orchestrator | Monday 14 April 2025 00:45:17 +0000 (0:00:00.175) 0:00:42.610 ********** 2025-04-14 00:45:17.857734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:17.857915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:17.858711 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:17.861230 | orchestrator | 2025-04-14 00:45:17.862144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-14 00:45:17.862820 | orchestrator | Monday 14 April 2025 00:45:17 +0000 (0:00:00.199) 0:00:42.810 ********** 2025-04-14 00:45:18.012277 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:18.013152 | orchestrator | 2025-04-14 00:45:18.013745 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-14 00:45:18.014365 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.156) 0:00:42.966 ********** 2025-04-14 00:45:18.173674 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:18.173871 | orchestrator | 2025-04-14 00:45:18.175018 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-14 00:45:18.175179 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.160) 0:00:43.127 ********** 2025-04-14 00:45:18.321654 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:18.322898 | orchestrator | 2025-04-14 00:45:18.324144 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-14 00:45:18.325434 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.147) 0:00:43.274 ********** 2025-04-14 00:45:18.469605 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:45:18.471596 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-14 00:45:18.473541 | orchestrator | } 2025-04-14 00:45:18.474553 | orchestrator | 2025-04-14 00:45:18.475938 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-14 00:45:18.476407 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.148) 0:00:43.423 ********** 2025-04-14 00:45:18.837660 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:45:18.838750 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-14 00:45:18.840256 | orchestrator | } 2025-04-14 00:45:18.841389 | orchestrator | 2025-04-14 00:45:18.841891 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-14 00:45:18.843074 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.368) 0:00:43.791 ********** 2025-04-14 00:45:18.984570 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:45:18.986336 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-14 
00:45:18.987131 | orchestrator | } 2025-04-14 00:45:18.988591 | orchestrator | 2025-04-14 00:45:18.989774 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-14 00:45:18.990423 | orchestrator | Monday 14 April 2025 00:45:18 +0000 (0:00:00.147) 0:00:43.938 ********** 2025-04-14 00:45:19.472221 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:19.473186 | orchestrator | 2025-04-14 00:45:19.474078 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-14 00:45:19.474119 | orchestrator | Monday 14 April 2025 00:45:19 +0000 (0:00:00.487) 0:00:44.425 ********** 2025-04-14 00:45:19.976466 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:19.976736 | orchestrator | 2025-04-14 00:45:19.976903 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-14 00:45:19.977266 | orchestrator | Monday 14 April 2025 00:45:19 +0000 (0:00:00.502) 0:00:44.928 ********** 2025-04-14 00:45:20.486342 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:20.487154 | orchestrator | 2025-04-14 00:45:20.487286 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-14 00:45:20.487586 | orchestrator | Monday 14 April 2025 00:45:20 +0000 (0:00:00.511) 0:00:45.439 ********** 2025-04-14 00:45:20.645575 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:20.646224 | orchestrator | 2025-04-14 00:45:20.646859 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-14 00:45:20.646891 | orchestrator | Monday 14 April 2025 00:45:20 +0000 (0:00:00.155) 0:00:45.595 ********** 2025-04-14 00:45:20.779435 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:20.780128 | orchestrator | 2025-04-14 00:45:20.780774 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-14 00:45:20.781322 | orchestrator | Monday 14 April 2025 00:45:20 +0000 (0:00:00.138) 0:00:45.733 ********** 2025-04-14 00:45:20.891331 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:20.891708 | orchestrator | 2025-04-14 00:45:20.893114 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-14 00:45:20.894466 | orchestrator | Monday 14 April 2025 00:45:20 +0000 (0:00:00.110) 0:00:45.844 ********** 2025-04-14 00:45:21.042477 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:45:21.043409 | orchestrator |  "vgs_report": { 2025-04-14 00:45:21.046189 | orchestrator |  "vg": [] 2025-04-14 00:45:21.047516 | orchestrator |  } 2025-04-14 00:45:21.047547 | orchestrator | } 2025-04-14 00:45:21.048384 | orchestrator | 2025-04-14 00:45:21.049427 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-14 00:45:21.051879 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.150) 0:00:45.995 ********** 2025-04-14 00:45:21.197737 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:21.198863 | orchestrator | 2025-04-14 00:45:21.199640 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-14 00:45:21.200522 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.156) 0:00:46.151 ********** 2025-04-14 00:45:21.559791 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:21.563807 | orchestrator | 2025-04-14 00:45:21.565385 | orchestrator | TASK [Print size needed for LVs on 
ceph_db_devices] **************************** 2025-04-14 00:45:21.709337 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.362) 0:00:46.513 ********** 2025-04-14 00:45:21.709528 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:21.711016 | orchestrator | 2025-04-14 00:45:21.711921 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-14 00:45:21.713748 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.149) 0:00:46.662 ********** 2025-04-14 00:45:21.846560 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:21.847297 | orchestrator | 2025-04-14 00:45:21.848320 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-14 00:45:21.849273 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.137) 0:00:46.800 ********** 2025-04-14 00:45:21.993415 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:21.993917 | orchestrator | 2025-04-14 00:45:21.994809 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-14 00:45:21.996074 | orchestrator | Monday 14 April 2025 00:45:21 +0000 (0:00:00.146) 0:00:46.946 ********** 2025-04-14 00:45:22.154291 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.155010 | orchestrator | 2025-04-14 00:45:22.155053 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-14 00:45:22.155424 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.160) 0:00:47.107 ********** 2025-04-14 00:45:22.301023 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.302223 | orchestrator | 2025-04-14 00:45:22.303042 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-14 00:45:22.306738 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.147) 0:00:47.255 ********** 2025-04-14 00:45:22.456632 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.457170 | orchestrator | 2025-04-14 00:45:22.457195 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-14 00:45:22.457441 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.153) 0:00:47.408 ********** 2025-04-14 00:45:22.605521 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.605707 | orchestrator | 2025-04-14 00:45:22.605738 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-14 00:45:22.607298 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.145) 0:00:47.554 ********** 2025-04-14 00:45:22.734330 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.734521 | orchestrator | 2025-04-14 00:45:22.735265 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-14 00:45:22.736030 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.133) 0:00:47.688 ********** 2025-04-14 00:45:22.888694 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:22.888993 | orchestrator | 2025-04-14 00:45:22.891891 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-14 00:45:22.892198 | orchestrator | Monday 14 April 2025 00:45:22 +0000 (0:00:00.152) 0:00:47.841 ********** 2025-04-14 00:45:23.029399 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:23.029808 | orchestrator | 2025-04-14 00:45:23.030768 | orchestrator | TASK [Fail if DB LV 
size < 30 GiB for ceph_db_devices] ************************* 2025-04-14 00:45:23.031350 | orchestrator | Monday 14 April 2025 00:45:23 +0000 (0:00:00.141) 0:00:47.983 ********** 2025-04-14 00:45:23.172919 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:23.173685 | orchestrator | 2025-04-14 00:45:23.174143 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-14 00:45:23.174922 | orchestrator | Monday 14 April 2025 00:45:23 +0000 (0:00:00.141) 0:00:48.124 ********** 2025-04-14 00:45:23.565894 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:23.567299 | orchestrator | 2025-04-14 00:45:23.567392 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-14 00:45:23.568038 | orchestrator | Monday 14 April 2025 00:45:23 +0000 (0:00:00.394) 0:00:48.519 ********** 2025-04-14 00:45:23.742598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:23.742776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:23.743070 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:23.745484 | orchestrator | 2025-04-14 00:45:23.746006 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-14 00:45:23.746103 | orchestrator | Monday 14 April 2025 00:45:23 +0000 (0:00:00.175) 0:00:48.694 ********** 2025-04-14 00:45:23.914169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:23.915326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:23.916104 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:23.917038 | orchestrator | 2025-04-14 00:45:23.917747 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-14 00:45:23.919174 | orchestrator | Monday 14 April 2025 00:45:23 +0000 (0:00:00.172) 0:00:48.866 ********** 2025-04-14 00:45:24.100086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.100335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:24.102240 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.103581 | orchestrator | 2025-04-14 00:45:24.104124 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-14 00:45:24.104552 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.184) 0:00:49.051 ********** 2025-04-14 00:45:24.288816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.289324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 
00:45:24.289375 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.290138 | orchestrator | 2025-04-14 00:45:24.291752 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-14 00:45:24.292349 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.190) 0:00:49.241 ********** 2025-04-14 00:45:24.465135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.465338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:24.467058 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.467125 | orchestrator | 2025-04-14 00:45:24.467163 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-14 00:45:24.469697 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.176) 0:00:49.418 ********** 2025-04-14 00:45:24.633682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.635487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:24.638133 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.638669 | orchestrator | 2025-04-14 00:45:24.638701 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-14 00:45:24.639323 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.168) 0:00:49.586 ********** 2025-04-14 00:45:24.801605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.801785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:24.801869 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.802624 | orchestrator | 2025-04-14 00:45:24.803295 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-14 00:45:24.803728 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.168) 0:00:49.755 ********** 2025-04-14 00:45:24.980842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:24.981771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:24.983158 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:24.983658 | orchestrator | 2025-04-14 00:45:24.984679 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-14 00:45:24.985233 | orchestrator | Monday 14 April 2025 00:45:24 +0000 (0:00:00.179) 0:00:49.934 ********** 2025-04-14 00:45:25.504432 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:25.504928 | orchestrator | 2025-04-14 00:45:25.505538 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] 
******************************** 2025-04-14 00:45:25.506102 | orchestrator | Monday 14 April 2025 00:45:25 +0000 (0:00:00.523) 0:00:50.458 ********** 2025-04-14 00:45:26.015598 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:26.369792 | orchestrator | 2025-04-14 00:45:26.371016 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-14 00:45:26.371084 | orchestrator | Monday 14 April 2025 00:45:26 +0000 (0:00:00.509) 0:00:50.968 ********** 2025-04-14 00:45:26.371166 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:45:26.371247 | orchestrator | 2025-04-14 00:45:26.371860 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-14 00:45:26.372384 | orchestrator | Monday 14 April 2025 00:45:26 +0000 (0:00:00.354) 0:00:51.322 ********** 2025-04-14 00:45:26.572548 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'vg_name': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'}) 2025-04-14 00:45:26.573202 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'vg_name': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}) 2025-04-14 00:45:26.573569 | orchestrator | 2025-04-14 00:45:26.574237 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-14 00:45:26.574882 | orchestrator | Monday 14 April 2025 00:45:26 +0000 (0:00:00.203) 0:00:51.526 ********** 2025-04-14 00:45:26.749193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:26.750124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:26.751104 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:26.752149 | orchestrator | 2025-04-14 00:45:26.753051 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-14 00:45:26.753493 | orchestrator | Monday 14 April 2025 00:45:26 +0000 (0:00:00.174) 0:00:51.701 ********** 2025-04-14 00:45:26.931422 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:26.932092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:26.933183 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:26.934541 | orchestrator | 2025-04-14 00:45:26.935707 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-14 00:45:26.936185 | orchestrator | Monday 14 April 2025 00:45:26 +0000 (0:00:00.183) 0:00:51.884 ********** 2025-04-14 00:45:27.120138 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'})  2025-04-14 00:45:27.120334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'})  2025-04-14 00:45:27.120387 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:45:27.120413 | orchestrator | 2025-04-14 
00:45:27.120738 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-14 00:45:27.121117 | orchestrator | Monday 14 April 2025 00:45:27 +0000 (0:00:00.186) 0:00:52.071 ********** 2025-04-14 00:45:28.009307 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 00:45:28.009915 | orchestrator |  "lvm_report": { 2025-04-14 00:45:28.010180 | orchestrator |  "lv": [ 2025-04-14 00:45:28.010220 | orchestrator |  { 2025-04-14 00:45:28.010902 | orchestrator |  "lv_name": "osd-block-89320cc7-f853-5314-9a76-744a2d019bd6", 2025-04-14 00:45:28.011287 | orchestrator |  "vg_name": "ceph-89320cc7-f853-5314-9a76-744a2d019bd6" 2025-04-14 00:45:28.013913 | orchestrator |  }, 2025-04-14 00:45:28.014758 | orchestrator |  { 2025-04-14 00:45:28.014974 | orchestrator |  "lv_name": "osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe", 2025-04-14 00:45:28.015263 | orchestrator |  "vg_name": "ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe" 2025-04-14 00:45:28.016332 | orchestrator |  } 2025-04-14 00:45:28.016583 | orchestrator |  ], 2025-04-14 00:45:28.017154 | orchestrator |  "pv": [ 2025-04-14 00:45:28.017841 | orchestrator |  { 2025-04-14 00:45:28.018081 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-14 00:45:28.018788 | orchestrator |  "vg_name": "ceph-89320cc7-f853-5314-9a76-744a2d019bd6" 2025-04-14 00:45:28.019153 | orchestrator |  }, 2025-04-14 00:45:28.019524 | orchestrator |  { 2025-04-14 00:45:28.021402 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-14 00:45:28.021512 | orchestrator |  "vg_name": "ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe" 2025-04-14 00:45:28.021548 | orchestrator |  } 2025-04-14 00:45:28.021564 | orchestrator |  ] 2025-04-14 00:45:28.021583 | orchestrator |  } 2025-04-14 00:45:28.021913 | orchestrator | } 2025-04-14 00:45:28.022208 | orchestrator | 2025-04-14 00:45:28.023156 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-04-14 00:45:28.024805 | orchestrator | 2025-04-14 00:45:28.026243 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-04-14 00:45:28.027375 | orchestrator | Monday 14 April 2025 00:45:28 +0000 (0:00:00.890) 0:00:52.962 ********** 2025-04-14 00:45:28.298646 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-04-14 00:45:28.299076 | orchestrator | 2025-04-14 00:45:28.300117 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-04-14 00:45:28.301102 | orchestrator | Monday 14 April 2025 00:45:28 +0000 (0:00:00.288) 0:00:53.250 ********** 2025-04-14 00:45:28.536090 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:28.536264 | orchestrator | 2025-04-14 00:45:28.536855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:28.537170 | orchestrator | Monday 14 April 2025 00:45:28 +0000 (0:00:00.239) 0:00:53.489 ********** 2025-04-14 00:45:29.024467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-04-14 00:45:29.025302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-04-14 00:45:29.025349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-04-14 00:45:29.026369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-04-14 00:45:29.028016 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-04-14 00:45:29.029299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-04-14 00:45:29.029789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-04-14 00:45:29.031106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-04-14 00:45:29.032062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-04-14 00:45:29.032838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-04-14 00:45:29.033623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-04-14 00:45:29.034470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-04-14 00:45:29.035735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-04-14 00:45:29.036605 | orchestrator | 2025-04-14 00:45:29.038136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:29.038545 | orchestrator | Monday 14 April 2025 00:45:29 +0000 (0:00:00.483) 0:00:53.973 ********** 2025-04-14 00:45:29.226832 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:29.227122 | orchestrator | 2025-04-14 00:45:29.227633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:29.228613 | orchestrator | Monday 14 April 2025 00:45:29 +0000 (0:00:00.205) 0:00:54.178 ********** 2025-04-14 00:45:29.474749 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:29.475136 | orchestrator | 2025-04-14 00:45:29.477175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:29.685424 | orchestrator | Monday 14 April 2025 00:45:29 +0000 (0:00:00.247) 0:00:54.426 ********** 2025-04-14 00:45:29.685629 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:29.686148 | orchestrator | 2025-04-14 00:45:29.688600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:29.689862 | orchestrator | Monday 14 April 2025 00:45:29 +0000 (0:00:00.211) 0:00:54.638 ********** 2025-04-14 00:45:29.896412 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:29.896577 | orchestrator | 2025-04-14 00:45:29.900128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:30.523713 | orchestrator | Monday 14 April 2025 00:45:29 +0000 (0:00:00.210) 0:00:54.849 ********** 2025-04-14 00:45:30.523853 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:30.524599 | orchestrator | 2025-04-14 00:45:30.525228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:30.525880 | orchestrator | Monday 14 April 2025 00:45:30 +0000 (0:00:00.628) 0:00:55.477 ********** 2025-04-14 00:45:30.745251 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:30.745791 | orchestrator | 2025-04-14 00:45:30.747380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:30.749507 | orchestrator | Monday 14 April 2025 00:45:30 +0000 (0:00:00.221) 0:00:55.698 ********** 2025-04-14 00:45:30.954349 | orchestrator | skipping: 
[testbed-node-5] 2025-04-14 00:45:30.954602 | orchestrator | 2025-04-14 00:45:30.955129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:30.955827 | orchestrator | Monday 14 April 2025 00:45:30 +0000 (0:00:00.209) 0:00:55.908 ********** 2025-04-14 00:45:31.149533 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:31.150310 | orchestrator | 2025-04-14 00:45:31.151382 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:31.151817 | orchestrator | Monday 14 April 2025 00:45:31 +0000 (0:00:00.195) 0:00:56.104 ********** 2025-04-14 00:45:31.638274 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2) 2025-04-14 00:45:31.639265 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2) 2025-04-14 00:45:31.639312 | orchestrator | 2025-04-14 00:45:31.640084 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:31.641155 | orchestrator | Monday 14 April 2025 00:45:31 +0000 (0:00:00.486) 0:00:56.590 ********** 2025-04-14 00:45:32.099124 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496) 2025-04-14 00:45:32.099307 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496) 2025-04-14 00:45:32.100188 | orchestrator | 2025-04-14 00:45:32.101706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:32.101979 | orchestrator | Monday 14 April 2025 00:45:32 +0000 (0:00:00.459) 0:00:57.050 ********** 2025-04-14 00:45:32.576456 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3) 2025-04-14 00:45:32.576636 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3) 2025-04-14 00:45:32.576986 | orchestrator | 2025-04-14 00:45:32.577235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:32.577958 | orchestrator | Monday 14 April 2025 00:45:32 +0000 (0:00:00.480) 0:00:57.530 ********** 2025-04-14 00:45:33.016542 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f) 2025-04-14 00:45:33.017274 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f) 2025-04-14 00:45:33.017791 | orchestrator | 2025-04-14 00:45:33.018639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-04-14 00:45:33.019107 | orchestrator | Monday 14 April 2025 00:45:33 +0000 (0:00:00.438) 0:00:57.969 ********** 2025-04-14 00:45:33.370338 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-04-14 00:45:33.370767 | orchestrator | 2025-04-14 00:45:33.371728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:33.372664 | orchestrator | Monday 14 April 2025 00:45:33 +0000 (0:00:00.354) 0:00:58.323 ********** 2025-04-14 00:45:34.068997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-04-14 00:45:34.069176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 
2025-04-14 00:45:34.070139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-04-14 00:45:34.070817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-04-14 00:45:34.071559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-04-14 00:45:34.071965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-04-14 00:45:34.072715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-04-14 00:45:34.073570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-04-14 00:45:34.073846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-04-14 00:45:34.074171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-04-14 00:45:34.074665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-04-14 00:45:34.075012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-04-14 00:45:34.075388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-04-14 00:45:34.075798 | orchestrator | 2025-04-14 00:45:34.076128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:34.076519 | orchestrator | Monday 14 April 2025 00:45:34 +0000 (0:00:00.697) 0:00:59.021 ********** 2025-04-14 00:45:34.280767 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:34.281312 | orchestrator | 2025-04-14 00:45:34.283976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:34.284465 | orchestrator | Monday 14 April 2025 00:45:34 +0000 (0:00:00.211) 0:00:59.232 ********** 2025-04-14 00:45:34.486900 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:34.488519 | orchestrator | 2025-04-14 00:45:34.489625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:34.490891 | orchestrator | Monday 14 April 2025 00:45:34 +0000 (0:00:00.206) 0:00:59.439 ********** 2025-04-14 00:45:34.699119 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:34.699314 | orchestrator | 2025-04-14 00:45:34.699358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:34.701516 | orchestrator | Monday 14 April 2025 00:45:34 +0000 (0:00:00.212) 0:00:59.652 ********** 2025-04-14 00:45:34.916980 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:34.917876 | orchestrator | 2025-04-14 00:45:34.917905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:34.917951 | orchestrator | Monday 14 April 2025 00:45:34 +0000 (0:00:00.217) 0:00:59.869 ********** 2025-04-14 00:45:35.116316 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:35.118371 | orchestrator | 2025-04-14 00:45:35.120308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:35.120420 | orchestrator | Monday 14 April 2025 00:45:35 +0000 (0:00:00.199) 0:01:00.069 ********** 2025-04-14 00:45:35.320256 | orchestrator | 
skipping: [testbed-node-5] 2025-04-14 00:45:35.320990 | orchestrator | 2025-04-14 00:45:35.321287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:35.321668 | orchestrator | Monday 14 April 2025 00:45:35 +0000 (0:00:00.203) 0:01:00.273 ********** 2025-04-14 00:45:35.527443 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:35.528167 | orchestrator | 2025-04-14 00:45:35.529302 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:35.530524 | orchestrator | Monday 14 April 2025 00:45:35 +0000 (0:00:00.207) 0:01:00.481 ********** 2025-04-14 00:45:35.733980 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:35.735278 | orchestrator | 2025-04-14 00:45:35.735481 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:35.736242 | orchestrator | Monday 14 April 2025 00:45:35 +0000 (0:00:00.205) 0:01:00.687 ********** 2025-04-14 00:45:36.673584 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-04-14 00:45:36.674473 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-04-14 00:45:36.674509 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-04-14 00:45:36.674533 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-04-14 00:45:36.674577 | orchestrator | 2025-04-14 00:45:36.674597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:36.675997 | orchestrator | Monday 14 April 2025 00:45:36 +0000 (0:00:00.937) 0:01:01.624 ********** 2025-04-14 00:45:36.909530 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:36.910835 | orchestrator | 2025-04-14 00:45:36.911896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:36.912773 | orchestrator | Monday 14 April 2025 00:45:36 +0000 (0:00:00.236) 0:01:01.861 ********** 2025-04-14 00:45:37.544641 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:37.544810 | orchestrator | 2025-04-14 00:45:37.545216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:37.545240 | orchestrator | Monday 14 April 2025 00:45:37 +0000 (0:00:00.636) 0:01:02.497 ********** 2025-04-14 00:45:37.766868 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:37.767078 | orchestrator | 2025-04-14 00:45:37.768524 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-04-14 00:45:37.769137 | orchestrator | Monday 14 April 2025 00:45:37 +0000 (0:00:00.221) 0:01:02.719 ********** 2025-04-14 00:45:37.983034 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:37.983775 | orchestrator | 2025-04-14 00:45:37.985274 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-04-14 00:45:37.986153 | orchestrator | Monday 14 April 2025 00:45:37 +0000 (0:00:00.215) 0:01:02.934 ********** 2025-04-14 00:45:38.138502 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:38.139008 | orchestrator | 2025-04-14 00:45:38.139967 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-04-14 00:45:38.141430 | orchestrator | Monday 14 April 2025 00:45:38 +0000 (0:00:00.156) 0:01:03.091 ********** 2025-04-14 00:45:38.391385 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'b3f558b9-064d-5710-baa4-8e41f44a2baf'}}) 2025-04-14 00:45:38.395093 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}}) 2025-04-14 00:45:38.395655 | orchestrator | 2025-04-14 00:45:38.395678 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-04-14 00:45:38.395693 | orchestrator | Monday 14 April 2025 00:45:38 +0000 (0:00:00.251) 0:01:03.343 ********** 2025-04-14 00:45:40.240580 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'}) 2025-04-14 00:45:40.242722 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}) 2025-04-14 00:45:40.242821 | orchestrator | 2025-04-14 00:45:40.243049 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-04-14 00:45:40.243383 | orchestrator | Monday 14 April 2025 00:45:40 +0000 (0:00:01.845) 0:01:05.189 ********** 2025-04-14 00:45:40.420704 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:40.420972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:40.421015 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:40.421052 | orchestrator | 2025-04-14 00:45:40.422002 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-04-14 00:45:41.741764 | orchestrator | Monday 14 April 2025 00:45:40 +0000 (0:00:00.184) 0:01:05.374 ********** 2025-04-14 00:45:41.741989 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'}) 2025-04-14 00:45:41.742825 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}) 2025-04-14 00:45:41.743883 | orchestrator | 2025-04-14 00:45:41.744972 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-04-14 00:45:41.745482 | orchestrator | Monday 14 April 2025 00:45:41 +0000 (0:00:01.318) 0:01:06.693 ********** 2025-04-14 00:45:41.947759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:41.948472 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:41.949201 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:41.949841 | orchestrator | 2025-04-14 00:45:41.950315 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-04-14 00:45:41.951484 | orchestrator | Monday 14 April 2025 00:45:41 +0000 (0:00:00.207) 0:01:06.900 ********** 2025-04-14 00:45:42.321798 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:42.324402 | orchestrator | 2025-04-14 00:45:42.324489 | orchestrator | TASK [Print 'Create DB VGs'] 
*************************************************** 2025-04-14 00:45:42.509615 | orchestrator | Monday 14 April 2025 00:45:42 +0000 (0:00:00.373) 0:01:07.273 ********** 2025-04-14 00:45:42.509746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:42.510440 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:42.510638 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:42.511761 | orchestrator | 2025-04-14 00:45:42.511987 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-04-14 00:45:42.512636 | orchestrator | Monday 14 April 2025 00:45:42 +0000 (0:00:00.187) 0:01:07.461 ********** 2025-04-14 00:45:42.673132 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:42.673982 | orchestrator | 2025-04-14 00:45:42.674749 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-04-14 00:45:42.675659 | orchestrator | Monday 14 April 2025 00:45:42 +0000 (0:00:00.165) 0:01:07.627 ********** 2025-04-14 00:45:42.855071 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:42.855224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:42.856603 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:42.857216 | orchestrator | 2025-04-14 00:45:42.857869 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-04-14 00:45:42.858352 | orchestrator | Monday 14 April 2025 00:45:42 +0000 (0:00:00.181) 0:01:07.808 ********** 2025-04-14 00:45:43.005510 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:43.006753 | orchestrator | 2025-04-14 00:45:43.008845 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-04-14 00:45:43.009742 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.149) 0:01:07.958 ********** 2025-04-14 00:45:43.168039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:43.168727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:43.169840 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:43.171239 | orchestrator | 2025-04-14 00:45:43.171420 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-04-14 00:45:43.172285 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.162) 0:01:08.121 ********** 2025-04-14 00:45:43.334682 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:43.335477 | orchestrator | 2025-04-14 00:45:43.499509 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-04-14 00:45:43.499617 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.162) 0:01:08.284 ********** 2025-04-14 00:45:43.499650 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:43.500816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:43.503425 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:43.504227 | orchestrator | 2025-04-14 00:45:43.504279 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-04-14 00:45:43.506304 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.169) 0:01:08.453 ********** 2025-04-14 00:45:43.681049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:43.681509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:43.681739 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:43.682568 | orchestrator | 2025-04-14 00:45:43.683010 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-04-14 00:45:43.684279 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.182) 0:01:08.635 ********** 2025-04-14 00:45:43.847233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:43.847791 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:43.848577 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:43.851349 | orchestrator | 2025-04-14 00:45:44.000546 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-04-14 00:45:44.000654 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.164) 0:01:08.800 ********** 2025-04-14 00:45:44.000686 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:44.001231 | orchestrator | 2025-04-14 00:45:44.002183 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-04-14 00:45:44.003107 | orchestrator | Monday 14 April 2025 00:45:43 +0000 (0:00:00.154) 0:01:08.954 ********** 2025-04-14 00:45:44.385747 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:44.386121 | orchestrator | 2025-04-14 00:45:44.386860 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-04-14 00:45:44.387047 | orchestrator | Monday 14 April 2025 00:45:44 +0000 (0:00:00.384) 0:01:09.338 ********** 2025-04-14 00:45:44.531311 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:44.532021 | orchestrator | 2025-04-14 00:45:44.532642 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-04-14 00:45:44.533279 | orchestrator | Monday 14 April 2025 00:45:44 +0000 (0:00:00.146) 0:01:09.485 ********** 2025-04-14 00:45:44.680163 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:45:44.680496 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-04-14 00:45:44.680534 | orchestrator | } 2025-04-14 00:45:44.681099 | orchestrator | 2025-04-14 00:45:44.681818 | orchestrator | 
TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-04-14 00:45:44.682155 | orchestrator | Monday 14 April 2025 00:45:44 +0000 (0:00:00.148) 0:01:09.634 ********** 2025-04-14 00:45:44.834364 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:45:44.834563 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-04-14 00:45:44.835892 | orchestrator | } 2025-04-14 00:45:44.835956 | orchestrator | 2025-04-14 00:45:44.836060 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-04-14 00:45:44.837113 | orchestrator | Monday 14 April 2025 00:45:44 +0000 (0:00:00.153) 0:01:09.788 ********** 2025-04-14 00:45:44.982705 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:45:44.982938 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-04-14 00:45:44.983881 | orchestrator | } 2025-04-14 00:45:44.984836 | orchestrator | 2025-04-14 00:45:44.985274 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-04-14 00:45:44.986345 | orchestrator | Monday 14 April 2025 00:45:44 +0000 (0:00:00.148) 0:01:09.936 ********** 2025-04-14 00:45:45.487509 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:45.487668 | orchestrator | 2025-04-14 00:45:45.487695 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-04-14 00:45:45.487854 | orchestrator | Monday 14 April 2025 00:45:45 +0000 (0:00:00.502) 0:01:10.439 ********** 2025-04-14 00:45:45.982506 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:45.983435 | orchestrator | 2025-04-14 00:45:45.984323 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-04-14 00:45:45.985138 | orchestrator | Monday 14 April 2025 00:45:45 +0000 (0:00:00.497) 0:01:10.936 ********** 2025-04-14 00:45:46.473730 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:46.474392 | orchestrator | 2025-04-14 00:45:46.475281 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-04-14 00:45:46.476002 | orchestrator | Monday 14 April 2025 00:45:46 +0000 (0:00:00.488) 0:01:11.425 ********** 2025-04-14 00:45:46.635542 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:46.636158 | orchestrator | 2025-04-14 00:45:46.636523 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-04-14 00:45:46.638608 | orchestrator | Monday 14 April 2025 00:45:46 +0000 (0:00:00.164) 0:01:11.589 ********** 2025-04-14 00:45:46.778987 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:46.779662 | orchestrator | 2025-04-14 00:45:46.780459 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-04-14 00:45:46.781204 | orchestrator | Monday 14 April 2025 00:45:46 +0000 (0:00:00.142) 0:01:11.732 ********** 2025-04-14 00:45:46.905464 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:46.905789 | orchestrator | 2025-04-14 00:45:46.907399 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-04-14 00:45:46.908006 | orchestrator | Monday 14 April 2025 00:45:46 +0000 (0:00:00.125) 0:01:11.858 ********** 2025-04-14 00:45:47.252766 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:45:47.253190 | orchestrator |  "vgs_report": { 2025-04-14 00:45:47.255117 | orchestrator |  "vg": [] 2025-04-14 00:45:47.255804 | orchestrator |  } 2025-04-14 00:45:47.257286 | orchestrator 
| } 2025-04-14 00:45:47.258229 | orchestrator | 2025-04-14 00:45:47.259474 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-04-14 00:45:47.260749 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.347) 0:01:12.206 ********** 2025-04-14 00:45:47.400717 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:47.401331 | orchestrator | 2025-04-14 00:45:47.404430 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-04-14 00:45:47.404714 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.147) 0:01:12.353 ********** 2025-04-14 00:45:47.559836 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:47.560270 | orchestrator | 2025-04-14 00:45:47.561197 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-04-14 00:45:47.563593 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.159) 0:01:12.512 ********** 2025-04-14 00:45:47.706383 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:47.707428 | orchestrator | 2025-04-14 00:45:47.708161 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-04-14 00:45:47.710518 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.147) 0:01:12.660 ********** 2025-04-14 00:45:47.861402 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:47.862054 | orchestrator | 2025-04-14 00:45:47.863753 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-04-14 00:45:47.864569 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.154) 0:01:12.814 ********** 2025-04-14 00:45:48.002312 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.004156 | orchestrator | 2025-04-14 00:45:48.005835 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-04-14 00:45:48.006001 | orchestrator | Monday 14 April 2025 00:45:47 +0000 (0:00:00.141) 0:01:12.955 ********** 2025-04-14 00:45:48.146237 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.146876 | orchestrator | 2025-04-14 00:45:48.147881 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-04-14 00:45:48.148403 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.144) 0:01:13.099 ********** 2025-04-14 00:45:48.294451 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.295133 | orchestrator | 2025-04-14 00:45:48.296008 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-04-14 00:45:48.296796 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.147) 0:01:13.247 ********** 2025-04-14 00:45:48.440393 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.442130 | orchestrator | 2025-04-14 00:45:48.443078 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-04-14 00:45:48.444155 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.144) 0:01:13.392 ********** 2025-04-14 00:45:48.588556 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.589952 | orchestrator | 2025-04-14 00:45:48.589999 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-04-14 00:45:48.591316 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.147) 0:01:13.540 ********** 2025-04-14 00:45:48.744471 | orchestrator | 
skipping: [testbed-node-5] 2025-04-14 00:45:48.746155 | orchestrator | 2025-04-14 00:45:48.746855 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-04-14 00:45:48.748144 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.156) 0:01:13.696 ********** 2025-04-14 00:45:48.898965 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:48.900289 | orchestrator | 2025-04-14 00:45:48.900332 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-04-14 00:45:48.900498 | orchestrator | Monday 14 April 2025 00:45:48 +0000 (0:00:00.154) 0:01:13.851 ********** 2025-04-14 00:45:49.252374 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:49.253117 | orchestrator | 2025-04-14 00:45:49.253875 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-04-14 00:45:49.255468 | orchestrator | Monday 14 April 2025 00:45:49 +0000 (0:00:00.352) 0:01:14.203 ********** 2025-04-14 00:45:49.398368 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:49.399215 | orchestrator | 2025-04-14 00:45:49.400656 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-04-14 00:45:49.401715 | orchestrator | Monday 14 April 2025 00:45:49 +0000 (0:00:00.147) 0:01:14.351 ********** 2025-04-14 00:45:49.548831 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:49.549374 | orchestrator | 2025-04-14 00:45:49.549955 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-04-14 00:45:49.551508 | orchestrator | Monday 14 April 2025 00:45:49 +0000 (0:00:00.149) 0:01:14.500 ********** 2025-04-14 00:45:49.738224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:49.738395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:49.738671 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:49.738701 | orchestrator | 2025-04-14 00:45:49.739062 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-04-14 00:45:49.739541 | orchestrator | Monday 14 April 2025 00:45:49 +0000 (0:00:00.190) 0:01:14.691 ********** 2025-04-14 00:45:49.948603 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:49.948876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:49.948970 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:49.949254 | orchestrator | 2025-04-14 00:45:49.949845 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-04-14 00:45:49.950263 | orchestrator | Monday 14 April 2025 00:45:49 +0000 (0:00:00.209) 0:01:14.901 ********** 2025-04-14 00:45:50.142807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:50.143084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:50.144175 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:50.145402 | orchestrator | 2025-04-14 00:45:50.146812 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-04-14 00:45:50.147797 | orchestrator | Monday 14 April 2025 00:45:50 +0000 (0:00:00.194) 0:01:15.096 ********** 2025-04-14 00:45:50.332501 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:50.332693 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:50.333878 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:50.334613 | orchestrator | 2025-04-14 00:45:50.335692 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-04-14 00:45:50.336044 | orchestrator | Monday 14 April 2025 00:45:50 +0000 (0:00:00.188) 0:01:15.285 ********** 2025-04-14 00:45:50.511659 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:50.512417 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:50.512699 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:50.515069 | orchestrator | 2025-04-14 00:45:50.697141 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-04-14 00:45:50.697258 | orchestrator | Monday 14 April 2025 00:45:50 +0000 (0:00:00.178) 0:01:15.464 ********** 2025-04-14 00:45:50.697292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:50.697394 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:50.698671 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:50.699630 | orchestrator | 2025-04-14 00:45:50.700296 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-04-14 00:45:50.702690 | orchestrator | Monday 14 April 2025 00:45:50 +0000 (0:00:00.186) 0:01:15.650 ********** 2025-04-14 00:45:50.879940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:50.880212 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:50.882523 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:50.884635 | orchestrator | 2025-04-14 00:45:50.885354 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-04-14 00:45:50.885852 | orchestrator | Monday 14 April 2025 00:45:50 +0000 (0:00:00.182) 0:01:15.833 ********** 2025-04-14 00:45:51.041006 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:51.043393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:51.044291 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:51.045391 | orchestrator | 2025-04-14 00:45:51.046269 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-04-14 00:45:51.047327 | orchestrator | Monday 14 April 2025 00:45:51 +0000 (0:00:00.161) 0:01:15.994 ********** 2025-04-14 00:45:51.757828 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:51.758250 | orchestrator | 2025-04-14 00:45:51.759918 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-04-14 00:45:51.761614 | orchestrator | Monday 14 April 2025 00:45:51 +0000 (0:00:00.714) 0:01:16.709 ********** 2025-04-14 00:45:52.263030 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:52.263221 | orchestrator | 2025-04-14 00:45:52.264026 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-04-14 00:45:52.264237 | orchestrator | Monday 14 April 2025 00:45:52 +0000 (0:00:00.507) 0:01:17.217 ********** 2025-04-14 00:45:52.422465 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:45:52.423193 | orchestrator | 2025-04-14 00:45:52.423229 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-04-14 00:45:52.424079 | orchestrator | Monday 14 April 2025 00:45:52 +0000 (0:00:00.158) 0:01:17.375 ********** 2025-04-14 00:45:52.676104 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'vg_name': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}) 2025-04-14 00:45:52.676286 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'vg_name': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'}) 2025-04-14 00:45:52.676708 | orchestrator | 2025-04-14 00:45:52.677468 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-04-14 00:45:52.679651 | orchestrator | Monday 14 April 2025 00:45:52 +0000 (0:00:00.250) 0:01:17.626 ********** 2025-04-14 00:45:52.856933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:52.858382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:52.858473 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:52.859034 | orchestrator | 2025-04-14 00:45:52.863825 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-04-14 00:45:52.864107 | orchestrator | Monday 14 April 2025 00:45:52 +0000 (0:00:00.184) 0:01:17.810 ********** 2025-04-14 00:45:53.040636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:53.042301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  
2025-04-14 00:45:53.044867 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:53.045313 | orchestrator | 2025-04-14 00:45:53.045344 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-04-14 00:45:53.046355 | orchestrator | Monday 14 April 2025 00:45:53 +0000 (0:00:00.183) 0:01:17.994 ********** 2025-04-14 00:45:53.215381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'})  2025-04-14 00:45:53.216724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'})  2025-04-14 00:45:53.217994 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:45:53.219325 | orchestrator | 2025-04-14 00:45:53.220256 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-04-14 00:45:53.221119 | orchestrator | Monday 14 April 2025 00:45:53 +0000 (0:00:00.174) 0:01:18.168 ********** 2025-04-14 00:45:53.851579 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 00:45:53.852468 | orchestrator |  "lvm_report": { 2025-04-14 00:45:53.853796 | orchestrator |  "lv": [ 2025-04-14 00:45:53.854581 | orchestrator |  { 2025-04-14 00:45:53.855379 | orchestrator |  "lv_name": "osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a", 2025-04-14 00:45:53.856511 | orchestrator |  "vg_name": "ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a" 2025-04-14 00:45:53.857320 | orchestrator |  }, 2025-04-14 00:45:53.858063 | orchestrator |  { 2025-04-14 00:45:53.858833 | orchestrator |  "lv_name": "osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf", 2025-04-14 00:45:53.859314 | orchestrator |  "vg_name": "ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf" 2025-04-14 00:45:53.860273 | orchestrator |  } 2025-04-14 00:45:53.861216 | orchestrator |  ], 2025-04-14 00:45:53.861939 | orchestrator |  "pv": [ 2025-04-14 00:45:53.862562 | orchestrator |  { 2025-04-14 00:45:53.863356 | orchestrator |  "pv_name": "/dev/sdb", 2025-04-14 00:45:53.863968 | orchestrator |  "vg_name": "ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf" 2025-04-14 00:45:53.864922 | orchestrator |  }, 2025-04-14 00:45:53.865284 | orchestrator |  { 2025-04-14 00:45:53.865792 | orchestrator |  "pv_name": "/dev/sdc", 2025-04-14 00:45:53.866214 | orchestrator |  "vg_name": "ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a" 2025-04-14 00:45:53.866790 | orchestrator |  } 2025-04-14 00:45:53.867218 | orchestrator |  ] 2025-04-14 00:45:53.867810 | orchestrator |  } 2025-04-14 00:45:53.868156 | orchestrator | } 2025-04-14 00:45:53.868728 | orchestrator | 2025-04-14 00:45:53.869197 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:45:53.869610 | orchestrator | 2025-04-14 00:45:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:45:53.869906 | orchestrator | 2025-04-14 00:45:53 | INFO  | Please wait and do not abort execution. 
2025-04-14 00:45:53.870904 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-14 00:45:53.871240 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-14 00:45:53.871803 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-04-14 00:45:53.872228 | orchestrator | 2025-04-14 00:45:53.872704 | orchestrator | 2025-04-14 00:45:53.873171 | orchestrator | 2025-04-14 00:45:53.873797 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:45:53.874209 | orchestrator | Monday 14 April 2025 00:45:53 +0000 (0:00:00.635) 0:01:18.804 ********** 2025-04-14 00:45:53.874758 | orchestrator | =============================================================================== 2025-04-14 00:45:53.875186 | orchestrator | Create block VGs -------------------------------------------------------- 5.84s 2025-04-14 00:45:53.875941 | orchestrator | Create block LVs -------------------------------------------------------- 4.06s 2025-04-14 00:45:53.876238 | orchestrator | Print LVM report data --------------------------------------------------- 2.26s 2025-04-14 00:45:53.876857 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.01s 2025-04-14 00:45:53.877206 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s 2025-04-14 00:45:53.877702 | orchestrator | Add known links to the list of available block devices ------------------ 1.76s 2025-04-14 00:45:53.878210 | orchestrator | Add known partitions to the list of available block devices ------------- 1.71s 2025-04-14 00:45:53.878850 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-04-14 00:45:53.879180 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2025-04-14 00:45:53.879815 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.50s 2025-04-14 00:45:53.880180 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.15s 2025-04-14 00:45:53.880779 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2025-04-14 00:45:53.881105 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-04-14 00:45:53.881635 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.78s 2025-04-14 00:45:53.882069 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2025-04-14 00:45:53.882757 | orchestrator | Fail if number of OSDs exceeds num_osds for a WAL VG -------------------- 0.70s 2025-04-14 00:45:53.883084 | orchestrator | Combine JSON from _lvs_cmd_output/_pvs_cmd_output ----------------------- 0.70s 2025-04-14 00:45:53.883568 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-04-14 00:45:53.883992 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.69s 2025-04-14 00:45:53.884626 | orchestrator | Print number of OSDs wanted per WAL VG ---------------------------------- 0.68s 2025-04-14 00:45:55.877376 | orchestrator | 2025-04-14 00:45:55 | INFO  | Task 25aba5f3-124f-4ba0-ad04-5b5970d64787 (facts) was prepared for execution. 
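The TASKS RECAP above covers the LVM inventory steps that produced the lvm_report printed a few lines earlier: list the logical and physical volumes together with their volume groups, merge the two JSON reports, and print the result. A minimal sketch of how such a report can be gathered with plain lvs/pvs calls follows; the task names mirror the log, but the exact commands, filtering and variable handling are assumptions, not the actual osism role code:

  - name: Get list of Ceph LVs with associated VGs
    ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
    register: _lvs_cmd_output
    changed_when: false

  - name: Get list of Ceph PVs with associated VGs
    ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
    register: _pvs_cmd_output
    changed_when: false

  - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
    ansible.builtin.set_fact:
      # lvs/pvs emit {"report": [{"lv": [...]}]} and {"report": [{"pv": [...]}]};
      # keep only the lv and pv lists, matching the lvm_report structure in the log
      lvm_report:
        lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
        pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

  - name: Print LVM report data
    ansible.builtin.debug:
      var: lvm_report

On the testbed nodes this yields exactly the kind of mapping shown above: two osd-block LVs in their ceph-<uuid> VGs, backed by /dev/sdb and /dev/sdc. The real tasks presumably also restrict the listing to Ceph-owned VGs; that filtering is omitted here.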
2025-04-14 00:45:59.231912 | orchestrator | 2025-04-14 00:45:55 | INFO  | It takes a moment until task 25aba5f3-124f-4ba0-ad04-5b5970d64787 (facts) has been started and output is visible here. 2025-04-14 00:45:59.232060 | orchestrator | 2025-04-14 00:45:59.233425 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-04-14 00:45:59.233476 | orchestrator | 2025-04-14 00:45:59.238008 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-04-14 00:46:00.286336 | orchestrator | Monday 14 April 2025 00:45:59 +0000 (0:00:00.203) 0:00:00.203 ********** 2025-04-14 00:46:00.286499 | orchestrator | ok: [testbed-manager] 2025-04-14 00:46:00.290145 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:46:00.290239 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:46:00.290721 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:46:00.290739 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:46:00.290749 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:46:00.290764 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:46:00.291831 | orchestrator | 2025-04-14 00:46:00.292541 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-04-14 00:46:00.293571 | orchestrator | Monday 14 April 2025 00:46:00 +0000 (0:00:01.053) 0:00:01.257 ********** 2025-04-14 00:46:00.463027 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:46:00.550999 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:46:00.629967 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:46:00.716250 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:46:00.805904 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:46:01.587658 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:46:01.589366 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:46:01.591928 | orchestrator | 2025-04-14 00:46:01.592064 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-04-14 00:46:01.592092 | orchestrator | 2025-04-14 00:46:01.593981 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-04-14 00:46:06.226459 | orchestrator | Monday 14 April 2025 00:46:01 +0000 (0:00:01.305) 0:00:02.562 ********** 2025-04-14 00:46:06.226615 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:46:06.226755 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:46:06.226926 | orchestrator | ok: [testbed-manager] 2025-04-14 00:46:06.227558 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:46:06.231724 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:46:06.233980 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:46:06.236988 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:46:06.237782 | orchestrator | 2025-04-14 00:46:06.238404 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-04-14 00:46:06.239011 | orchestrator | 2025-04-14 00:46:06.239933 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-04-14 00:46:06.240212 | orchestrator | Monday 14 April 2025 00:46:06 +0000 (0:00:04.639) 0:00:07.202 ********** 2025-04-14 00:46:06.592511 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:46:06.678840 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:46:06.753326 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:46:06.838279 | orchestrator | skipping: [testbed-node-2] 2025-04-14 
00:46:06.934664 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:46:06.982367 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:46:06.982693 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:46:06.982814 | orchestrator | 2025-04-14 00:46:06.983567 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:46:06.984885 | orchestrator | 2025-04-14 00:46:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-04-14 00:46:06.985034 | orchestrator | 2025-04-14 00:46:06 | INFO  | Please wait and do not abort execution. 2025-04-14 00:46:06.985064 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.987138 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.987246 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.987270 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.987945 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.988377 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.990917 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:46:06.991725 | orchestrator | 2025-04-14 00:46:06.991897 | orchestrator | Monday 14 April 2025 00:46:06 +0000 (0:00:00.757) 0:00:07.960 ********** 2025-04-14 00:46:06.992187 | orchestrator | =============================================================================== 2025-04-14 00:46:06.992458 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.64s 2025-04-14 00:46:06.992681 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.31s 2025-04-14 00:46:06.993018 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-04-14 00:46:06.994112 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s 2025-04-14 00:46:07.576361 | orchestrator | 2025-04-14 00:46:07.579629 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Apr 14 00:46:07 UTC 2025 2025-04-14 00:46:09.065288 | orchestrator | 2025-04-14 00:46:09.065422 | orchestrator | 2025-04-14 00:46:09 | INFO  | Collection nutshell is prepared for execution 2025-04-14 00:46:09.071936 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [0] - dotfiles 2025-04-14 00:46:09.071998 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [0] - homer 2025-04-14 00:46:09.072123 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [0] - netdata 2025-04-14 00:46:09.072146 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [0] - openstackclient 2025-04-14 00:46:09.072162 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [0] - phpmyadmin 2025-04-14 00:46:09.072178 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [0] - common 2025-04-14 00:46:09.072194 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [1] -- loadbalancer 2025-04-14 00:46:09.072228 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [2] --- opensearch 2025-04-14 00:46:09.072244 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [2] --- mariadb-ng 2025-04-14 00:46:09.072288 | orchestrator | 2025-04-14 
00:46:09 | INFO  | D [3] ---- horizon 2025-04-14 00:46:09.072303 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [3] ---- keystone 2025-04-14 00:46:09.072317 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [4] ----- neutron 2025-04-14 00:46:09.072332 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ wait-for-nova 2025-04-14 00:46:09.072347 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [5] ------ octavia 2025-04-14 00:46:09.072387 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- barbican 2025-04-14 00:46:09.072402 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- designate 2025-04-14 00:46:09.072422 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- ironic 2025-04-14 00:46:09.072539 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- placement 2025-04-14 00:46:09.072562 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- magnum 2025-04-14 00:46:09.072583 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [1] -- openvswitch 2025-04-14 00:46:09.072665 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [2] --- ovn 2025-04-14 00:46:09.072688 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [1] -- memcached 2025-04-14 00:46:09.072703 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [1] -- redis 2025-04-14 00:46:09.072719 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [1] -- rabbitmq-ng 2025-04-14 00:46:09.072767 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [0] - kubernetes 2025-04-14 00:46:09.072836 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [1] -- kubeconfig 2025-04-14 00:46:09.075794 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [1] -- copy-kubeconfig 2025-04-14 00:46:09.075836 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [0] - ceph 2025-04-14 00:46:09.075891 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [1] -- ceph-pools 2025-04-14 00:46:09.200606 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [2] --- copy-ceph-keys 2025-04-14 00:46:09.200710 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [3] ---- cephclient 2025-04-14 00:46:09.200726 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-04-14 00:46:09.200740 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [4] ----- wait-for-keystone 2025-04-14 00:46:09.200753 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ kolla-ceph-rgw 2025-04-14 00:46:09.200792 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ glance 2025-04-14 00:46:09.200806 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ cinder 2025-04-14 00:46:09.200819 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ nova 2025-04-14 00:46:09.200831 | orchestrator | 2025-04-14 00:46:09 | INFO  | A [4] ----- prometheus 2025-04-14 00:46:09.200844 | orchestrator | 2025-04-14 00:46:09 | INFO  | D [5] ------ grafana 2025-04-14 00:46:09.200927 | orchestrator | 2025-04-14 00:46:09 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-04-14 00:46:11.070140 | orchestrator | 2025-04-14 00:46:09 | INFO  | Tasks are running in the background 2025-04-14 00:46:11.070289 | orchestrator | 2025-04-14 00:46:11 | INFO  | No task IDs specified, wait for all currently running tasks 2025-04-14 00:46:13.170367 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:13.170549 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:13.171328 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task 
8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:13.171966 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:13.173032 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:13.176300 | orchestrator | 2025-04-14 00:46:13 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:16.234735 | orchestrator | 2025-04-14 00:46:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:16.234878 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:16.235089 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:16.238140 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:16.240533 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:16.244323 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:16.244945 | orchestrator | 2025-04-14 00:46:16 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:19.310370 | orchestrator | 2025-04-14 00:46:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:19.310540 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:19.314410 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:19.322549 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:19.322609 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:19.332863 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:22.412292 | orchestrator | 2025-04-14 00:46:19 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:22.412424 | orchestrator | 2025-04-14 00:46:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:22.412462 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:22.416675 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:22.421738 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:22.422656 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:22.425792 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:22.432698 | orchestrator | 2025-04-14 00:46:22 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:25.508066 | orchestrator | 2025-04-14 00:46:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:25.508197 | orchestrator | 2025-04-14 00:46:25 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:25.511326 | orchestrator | 2025-04-14 
00:46:25 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:25.511410 | orchestrator | 2025-04-14 00:46:25 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:25.514991 | orchestrator | 2025-04-14 00:46:25 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:25.515052 | orchestrator | 2025-04-14 00:46:25 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:25.515079 | orchestrator | 2025-04-14 00:46:25 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:28.613511 | orchestrator | 2025-04-14 00:46:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:28.613638 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:28.613992 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:28.619378 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:28.620017 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:28.622535 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:28.624222 | orchestrator | 2025-04-14 00:46:28 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:31.699347 | orchestrator | 2025-04-14 00:46:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:31.699560 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:31.707379 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:31.708029 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:31.717921 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:31.721290 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:31.724597 | orchestrator | 2025-04-14 00:46:31 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:34.838791 | orchestrator | 2025-04-14 00:46:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:34.839046 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:34.843864 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:34.844864 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:34.849866 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:34.849993 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:34.856574 | orchestrator | 2025-04-14 00:46:34 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state STARTED 2025-04-14 00:46:37.931669 | orchestrator | 2025-04-14 00:46:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:37.931864 | 
orchestrator | 2025-04-14 00:46:37 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:37.932397 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:37.937121 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:37.938120 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:37.938182 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:37.940219 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:37.942207 | orchestrator | 2025-04-14 00:46:37.942286 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-04-14 00:46:37.942305 | orchestrator | 2025-04-14 00:46:37.942319 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-04-14 00:46:37.942334 | orchestrator | Monday 14 April 2025 00:46:18 +0000 (0:00:00.598) 0:00:00.598 ********** 2025-04-14 00:46:37.942347 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:46:37.942361 | orchestrator | changed: [testbed-manager] 2025-04-14 00:46:37.942375 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:46:37.942388 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:46:37.942401 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:46:37.942415 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:46:37.942428 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:46:37.942441 | orchestrator | 2025-04-14 00:46:37.942455 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-04-14 00:46:37.942475 | orchestrator | Monday 14 April 2025 00:46:22 +0000 (0:00:04.135) 0:00:04.734 ********** 2025-04-14 00:46:37.942490 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-04-14 00:46:37.942503 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-04-14 00:46:37.942522 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-04-14 00:46:37.942536 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-04-14 00:46:37.942549 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-04-14 00:46:37.942563 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-04-14 00:46:37.942576 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-04-14 00:46:37.942589 | orchestrator | 2025-04-14 00:46:37.942603 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-04-14 00:46:37.942616 | orchestrator | Monday 14 April 2025 00:46:25 +0000 (0:00:03.063) 0:00:07.798 ********** 2025-04-14 00:46:37.942631 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:24.008891', 'end': '2025-04-14 00:46:24.016383', 'delta': '0:00:00.007492', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942675 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:24.071599', 'end': '2025-04-14 00:46:24.078713', 'delta': '0:00:00.007114', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942689 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:24.223581', 'end': '2025-04-14 00:46:24.228276', 'delta': '0:00:00.004695', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942724 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:24.619064', 'end': '2025-04-14 00:46:24.627655', 'delta': '0:00:00.008591', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-04-14 00:46:37.942740 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:24.931642', 'end': '2025-04-14 00:46:24.939531', 'delta': '0:00:00.007889', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942753 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:25.150470', 'end': '2025-04-14 00:46:25.158889', 'delta': '0:00:00.008419', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942782 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-04-14 00:46:25.327789', 'end': '2025-04-14 00:46:25.336809', 'delta': '0:00:00.009020', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-04-14 00:46:37.942797 | orchestrator | 2025-04-14 00:46:37.942840 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-04-14 00:46:37.942855 | orchestrator | Monday 14 April 2025 00:46:29 +0000 (0:00:03.486) 0:00:11.284 ********** 2025-04-14 00:46:37.942870 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-04-14 00:46:37.942884 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-04-14 00:46:37.942898 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-04-14 00:46:37.942913 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-04-14 00:46:37.942927 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-04-14 00:46:37.942940 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-04-14 00:46:37.942954 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-04-14 00:46:37.942968 | orchestrator | 2025-04-14 00:46:37.942982 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:46:37.942995 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943011 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943026 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943046 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943083 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943098 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943113 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:46:37.943125 | orchestrator | 2025-04-14 00:46:37.943138 | orchestrator | Monday 14 April 2025 00:46:33 +0000 (0:00:04.400) 0:00:15.685 ********** 2025-04-14 00:46:37.943151 | orchestrator | =============================================================================== 2025-04-14 00:46:37.943211 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.40s 2025-04-14 00:46:37.943225 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.14s 2025-04-14 00:46:37.943237 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.49s 2025-04-14 00:46:37.943250 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 3.06s 2025-04-14 00:46:37.943266 | orchestrator | 2025-04-14 00:46:37 | INFO  | Task 3649b2f2-c554-425e-94b5-e54a07a6c2c7 is in state SUCCESS 2025-04-14 00:46:41.029066 | orchestrator | 2025-04-14 00:46:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:41.029206 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:41.032240 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:41.032289 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:41.032313 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:41.034105 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:41.034193 | orchestrator | 2025-04-14 00:46:41 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:41.035478 | orchestrator | 2025-04-14 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:44.115608 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:44.115914 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:44.119881 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:44.122340 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:44.129524 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:47.196440 | orchestrator | 2025-04-14 00:46:44 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:47.196560 | orchestrator | 2025-04-14 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:47.196617 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:47.202154 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:47.207043 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:47.210535 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:47.217018 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:47.218989 | orchestrator | 2025-04-14 00:46:47 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:47.222616 | orchestrator | 2025-04-14 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:50.286342 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:50.287733 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:50.287833 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:50.289415 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task 
683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:50.291980 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:50.292336 | orchestrator | 2025-04-14 00:46:50 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:50.292774 | orchestrator | 2025-04-14 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:53.373062 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:53.375643 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:53.378683 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:53.379771 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:53.382164 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:53.385280 | orchestrator | 2025-04-14 00:46:53 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:56.435267 | orchestrator | 2025-04-14 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:56.435416 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:56.441345 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:56.447937 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:56.448835 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:56.453580 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:56.458987 | orchestrator | 2025-04-14 00:46:56 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:46:59.564304 | orchestrator | 2025-04-14 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:46:59.564444 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:46:59.576706 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:46:59.580102 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:46:59.586528 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:46:59.591341 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state STARTED 2025-04-14 00:46:59.591425 | orchestrator | 2025-04-14 00:46:59 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:02.664911 | orchestrator | 2025-04-14 00:46:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:02.665058 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:02.668278 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:02.671759 | orchestrator | 2025-04-14 
00:47:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:02.672878 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:02.679191 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:02.681011 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task 3d964418-7c57-4138-98d1-2d1643d102bc is in state SUCCESS 2025-04-14 00:47:02.684616 | orchestrator | 2025-04-14 00:47:02 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:02.684717 | orchestrator | 2025-04-14 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:05.750639 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:05.752459 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:05.756614 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:05.760471 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:05.763948 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:05.764046 | orchestrator | 2025-04-14 00:47:05 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:05.764071 | orchestrator | 2025-04-14 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:08.829256 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:08.833401 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:08.833509 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:08.833823 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:08.833859 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:08.836242 | orchestrator | 2025-04-14 00:47:08 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:08.837162 | orchestrator | 2025-04-14 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:11.890612 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:11.894300 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:11.904063 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:11.904392 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:11.906202 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:14.978260 | orchestrator | 2025-04-14 00:47:11 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:14.978391 | orchestrator | 2025-04-14 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:14.978430 | 
orchestrator | 2025-04-14 00:47:14 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:14.978905 | orchestrator | 2025-04-14 00:47:14 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:14.978970 | orchestrator | 2025-04-14 00:47:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:14.980932 | orchestrator | 2025-04-14 00:47:14 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:14.982310 | orchestrator | 2025-04-14 00:47:14 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:14.985122 | orchestrator | 2025-04-14 00:47:14 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:18.075687 | orchestrator | 2025-04-14 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:18.075923 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:18.077769 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:18.079239 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:18.081484 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:18.086599 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:18.089547 | orchestrator | 2025-04-14 00:47:18 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:21.184415 | orchestrator | 2025-04-14 00:47:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:21.184556 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:21.186396 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:21.186453 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:21.194668 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:21.196911 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:21.201171 | orchestrator | 2025-04-14 00:47:21 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:24.265505 | orchestrator | 2025-04-14 00:47:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:24.265684 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:24.268101 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state STARTED 2025-04-14 00:47:24.269159 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:24.271093 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:24.276129 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:24.280076 | orchestrator | 2025-04-14 00:47:24 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is 
in state STARTED 2025-04-14 00:47:27.328360 | orchestrator | 2025-04-14 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:27.328474 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:27.331558 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task b6b38240-67fb-4f83-b691-e30dfaf1f8a7 is in state SUCCESS 2025-04-14 00:47:27.334931 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:27.335848 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:27.336446 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:27.337121 | orchestrator | 2025-04-14 00:47:27 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:30.381980 | orchestrator | 2025-04-14 00:47:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:30.382183 | orchestrator | 2025-04-14 00:47:30 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:30.386442 | orchestrator | 2025-04-14 00:47:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:30.390869 | orchestrator | 2025-04-14 00:47:30 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:30.398367 | orchestrator | 2025-04-14 00:47:30 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state STARTED 2025-04-14 00:47:30.401157 | orchestrator | 2025-04-14 00:47:30 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:30.402829 | orchestrator | 2025-04-14 00:47:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:33.443771 | orchestrator | 2025-04-14 00:47:33 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:33.445416 | orchestrator | 2025-04-14 00:47:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:33.447440 | orchestrator | 2025-04-14 00:47:33 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:33.451481 | orchestrator | 2025-04-14 00:47:33.451535 | orchestrator | 2025-04-14 00:47:33.451551 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-04-14 00:47:33.451567 | orchestrator | 2025-04-14 00:47:33.451581 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-04-14 00:47:33.451596 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:00.626) 0:00:00.626 ********** 2025-04-14 00:47:33.451610 | orchestrator | ok: [testbed-manager] => { 2025-04-14 00:47:33.451626 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-04-14 00:47:33.451642 | orchestrator | } 2025-04-14 00:47:33.451656 | orchestrator | 2025-04-14 00:47:33.451670 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-04-14 00:47:33.451684 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:00.426) 0:00:01.052 ********** 2025-04-14 00:47:33.451698 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.451713 | orchestrator | 2025-04-14 00:47:33.451754 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-04-14 00:47:33.451768 | orchestrator | Monday 14 April 2025 00:46:21 +0000 (0:00:01.949) 0:00:03.002 ********** 2025-04-14 00:47:33.451782 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-04-14 00:47:33.451796 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-04-14 00:47:33.451810 | orchestrator | 2025-04-14 00:47:33.451825 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-04-14 00:47:33.451839 | orchestrator | Monday 14 April 2025 00:46:23 +0000 (0:00:01.727) 0:00:04.730 ********** 2025-04-14 00:47:33.451853 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.451867 | orchestrator | 2025-04-14 00:47:33.451881 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-04-14 00:47:33.451895 | orchestrator | Monday 14 April 2025 00:46:27 +0000 (0:00:04.709) 0:00:09.439 ********** 2025-04-14 00:47:33.451930 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.451944 | orchestrator | 2025-04-14 00:47:33.451959 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-04-14 00:47:33.451973 | orchestrator | Monday 14 April 2025 00:46:29 +0000 (0:00:01.752) 0:00:11.192 ********** 2025-04-14 00:47:33.451986 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-04-14 00:47:33.452000 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.452015 | orchestrator | 2025-04-14 00:47:33.452031 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-04-14 00:47:33.452047 | orchestrator | Monday 14 April 2025 00:46:56 +0000 (0:00:27.164) 0:00:38.357 ********** 2025-04-14 00:47:33.452062 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.452077 | orchestrator | 2025-04-14 00:47:33.452092 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:47:33.452108 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.452125 | orchestrator | 2025-04-14 00:47:33.452141 | orchestrator | Monday 14 April 2025 00:46:59 +0000 (0:00:02.492) 0:00:40.850 ********** 2025-04-14 00:47:33.452157 | orchestrator | =============================================================================== 2025-04-14 00:47:33.452173 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.16s 2025-04-14 00:47:33.452188 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.71s 2025-04-14 00:47:33.452203 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.49s 2025-04-14 00:47:33.452226 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.95s 2025-04-14 00:47:33.452243 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.75s 2025-04-14 00:47:33.452258 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.73s 2025-04-14 00:47:33.452274 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.43s 2025-04-14 00:47:33.452289 | orchestrator | 2025-04-14 00:47:33.452304 | orchestrator | 2025-04-14 00:47:33.452319 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-04-14 00:47:33.452335 | orchestrator | 2025-04-14 00:47:33.452350 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-04-14 00:47:33.452366 | orchestrator | Monday 14 April 2025 00:46:17 +0000 (0:00:00.247) 0:00:00.247 ********** 2025-04-14 00:47:33.452382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-04-14 00:47:33.452397 | orchestrator | 2025-04-14 00:47:33.452411 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-04-14 00:47:33.452425 | orchestrator | Monday 14 April 2025 00:46:18 +0000 (0:00:00.451) 0:00:00.698 ********** 2025-04-14 00:47:33.452438 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-04-14 00:47:33.452452 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-04-14 00:47:33.452466 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-04-14 00:47:33.452480 | orchestrator | 2025-04-14 00:47:33.452494 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-04-14 00:47:33.452508 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:01.698) 0:00:02.396 ********** 2025-04-14 00:47:33.452521 | orchestrator | changed: [testbed-manager] 
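The homer play above and the openstackclient play that follows share the same deployment pattern: create the service directory under /opt, copy a docker-compose.yml, and then let a "Manage ... service" task bring the containers up, retrying until they respond; the FAILED - RETRYING lines are that retry loop at work, not a terminal failure (both tasks end in ok). A generic sketch of the pattern, assuming a retries/until loop around a plain docker compose call; the service name, paths and values are illustrative and do not reproduce the osism.services role code:

  - hosts: testbed-manager
    become: true
    tasks:
      - name: Create required directories
        ansible.builtin.file:
          path: /opt/example-service            # illustrative path, not the osism layout
          state: directory
          mode: "0755"

      - name: Copy docker-compose.yml file
        ansible.builtin.copy:
          src: docker-compose.yml               # assumed to ship with the role
          dest: /opt/example-service/docker-compose.yml
          mode: "0644"
        notify: Restart example service

      - name: Manage example service
        ansible.builtin.command:
          cmd: docker compose up -d
          chdir: /opt/example-service
        register: _compose_result
        retries: 10                             # the log's "(10 retries left)" suggests retries: 10
        delay: 10
        until: _compose_result.rc == 0
        changed_when: false

    handlers:
      - name: Restart example service
        ansible.builtin.command:
          cmd: docker compose restart
          chdir: /opt/example-service

The long runtimes in the recaps (for example 27.16s for "Manage homer service" above) are most likely dominated by image pulls on the freshly provisioned manager; the "Manage openstackclient service" task below behaves the same way.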
2025-04-14 00:47:33.452536 | orchestrator | 2025-04-14 00:47:33.452550 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-04-14 00:47:33.452564 | orchestrator | Monday 14 April 2025 00:46:21 +0000 (0:00:01.543) 0:00:03.944 ********** 2025-04-14 00:47:33.452578 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-04-14 00:47:33.452592 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.452613 | orchestrator | 2025-04-14 00:47:33.452638 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-04-14 00:47:33.452653 | orchestrator | Monday 14 April 2025 00:47:11 +0000 (0:00:50.225) 0:00:54.169 ********** 2025-04-14 00:47:33.452667 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.452681 | orchestrator | 2025-04-14 00:47:33.452695 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-04-14 00:47:33.452709 | orchestrator | Monday 14 April 2025 00:47:13 +0000 (0:00:01.880) 0:00:56.050 ********** 2025-04-14 00:47:33.452739 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.452754 | orchestrator | 2025-04-14 00:47:33.452768 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-04-14 00:47:33.452782 | orchestrator | Monday 14 April 2025 00:47:15 +0000 (0:00:02.044) 0:00:58.094 ********** 2025-04-14 00:47:33.452796 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.452810 | orchestrator | 2025-04-14 00:47:33.452824 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-04-14 00:47:33.452838 | orchestrator | Monday 14 April 2025 00:47:19 +0000 (0:00:03.851) 0:01:01.946 ********** 2025-04-14 00:47:33.452852 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.452866 | orchestrator | 2025-04-14 00:47:33.452879 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-04-14 00:47:33.452893 | orchestrator | Monday 14 April 2025 00:47:21 +0000 (0:00:01.688) 0:01:03.634 ********** 2025-04-14 00:47:33.452907 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.452920 | orchestrator | 2025-04-14 00:47:33.452934 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-04-14 00:47:33.452948 | orchestrator | Monday 14 April 2025 00:47:22 +0000 (0:00:01.212) 0:01:04.846 ********** 2025-04-14 00:47:33.452962 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.452976 | orchestrator | 2025-04-14 00:47:33.452990 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:47:33.453004 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.453018 | orchestrator | 2025-04-14 00:47:33.453031 | orchestrator | Monday 14 April 2025 00:47:22 +0000 (0:00:00.662) 0:01:05.509 ********** 2025-04-14 00:47:33.453045 | orchestrator | =============================================================================== 2025-04-14 00:47:33.453059 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 50.23s 2025-04-14 00:47:33.453073 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.85s 2025-04-14 00:47:33.453087 | orchestrator | osism.services.openstackclient : Remove 
ospurge wrapper script ---------- 2.04s 2025-04-14 00:47:33.453106 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.88s 2025-04-14 00:47:33.453120 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.70s 2025-04-14 00:47:33.453134 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.69s 2025-04-14 00:47:33.453148 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.54s 2025-04-14 00:47:33.453162 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.21s 2025-04-14 00:47:33.453176 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.67s 2025-04-14 00:47:33.453190 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.45s 2025-04-14 00:47:33.453203 | orchestrator | 2025-04-14 00:47:33.453217 | orchestrator | 2025-04-14 00:47:33.453231 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:47:33.453245 | orchestrator | 2025-04-14 00:47:33.453258 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:47:33.453272 | orchestrator | Monday 14 April 2025 00:46:17 +0000 (0:00:00.482) 0:00:00.482 ********** 2025-04-14 00:47:33.453286 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-04-14 00:47:33.453307 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-04-14 00:47:33.453321 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-04-14 00:47:33.453335 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-04-14 00:47:33.453348 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-04-14 00:47:33.453362 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-04-14 00:47:33.453376 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-04-14 00:47:33.453390 | orchestrator | 2025-04-14 00:47:33.453404 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-04-14 00:47:33.453418 | orchestrator | 2025-04-14 00:47:33.453432 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-04-14 00:47:33.453446 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:01.809) 0:00:02.291 ********** 2025-04-14 00:47:33.453473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:47:33.453489 | orchestrator | 2025-04-14 00:47:33.453504 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-04-14 00:47:33.453518 | orchestrator | Monday 14 April 2025 00:46:21 +0000 (0:00:02.586) 0:00:04.878 ********** 2025-04-14 00:47:33.453532 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:47:33.453546 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:47:33.453560 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.453574 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:47:33.453588 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:47:33.453602 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:47:33.453616 | 
orchestrator | ok: [testbed-node-5] 2025-04-14 00:47:33.453630 | orchestrator | 2025-04-14 00:47:33.453644 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-04-14 00:47:33.453665 | orchestrator | Monday 14 April 2025 00:46:24 +0000 (0:00:03.196) 0:00:08.074 ********** 2025-04-14 00:47:33.453680 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:47:33.453693 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.453707 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:47:33.453784 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:47:33.453801 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:47:33.453815 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:47:33.453828 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:47:33.453842 | orchestrator | 2025-04-14 00:47:33.453857 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-04-14 00:47:33.453871 | orchestrator | Monday 14 April 2025 00:46:29 +0000 (0:00:04.255) 0:00:12.330 ********** 2025-04-14 00:47:33.453884 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:47:33.453905 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:47:33.453919 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.453932 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:47:33.453946 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:47:33.453960 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:47:33.453974 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:47:33.453988 | orchestrator | 2025-04-14 00:47:33.454002 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-04-14 00:47:33.454078 | orchestrator | Monday 14 April 2025 00:46:32 +0000 (0:00:02.960) 0:00:15.291 ********** 2025-04-14 00:47:33.454097 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:47:33.454111 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:47:33.454125 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.454139 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:47:33.454153 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:47:33.454167 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:47:33.454181 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:47:33.454194 | orchestrator | 2025-04-14 00:47:33.454221 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-04-14 00:47:33.454236 | orchestrator | Monday 14 April 2025 00:46:42 +0000 (0:00:10.070) 0:00:25.362 ********** 2025-04-14 00:47:33.454249 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:47:33.454263 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:47:33.454277 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:47:33.454289 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:47:33.454301 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:47:33.454314 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:47:33.454326 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.454339 | orchestrator | 2025-04-14 00:47:33.454351 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-04-14 00:47:33.454364 | orchestrator | Monday 14 April 2025 00:47:01 +0000 (0:00:19.239) 0:00:44.601 ********** 2025-04-14 00:47:33.454377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:47:33.454394 | orchestrator | 2025-04-14 00:47:33.454407 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-04-14 00:47:33.454419 | orchestrator | Monday 14 April 2025 00:47:03 +0000 (0:00:02.192) 0:00:46.794 ********** 2025-04-14 00:47:33.454431 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-04-14 00:47:33.454444 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-04-14 00:47:33.454456 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-04-14 00:47:33.454469 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-04-14 00:47:33.454481 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-04-14 00:47:33.454493 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-04-14 00:47:33.454506 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-04-14 00:47:33.454518 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-04-14 00:47:33.454530 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-04-14 00:47:33.454542 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-04-14 00:47:33.454555 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-04-14 00:47:33.454567 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-04-14 00:47:33.454580 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-04-14 00:47:33.454592 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-04-14 00:47:33.454604 | orchestrator | 2025-04-14 00:47:33.454616 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-04-14 00:47:33.454629 | orchestrator | Monday 14 April 2025 00:47:10 +0000 (0:00:07.121) 0:00:53.916 ********** 2025-04-14 00:47:33.454641 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.454654 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:47:33.454667 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:47:33.454679 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:47:33.454691 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:47:33.454704 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:47:33.454733 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:47:33.454747 | orchestrator | 2025-04-14 00:47:33.454759 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-04-14 00:47:33.454772 | orchestrator | Monday 14 April 2025 00:47:13 +0000 (0:00:02.652) 0:00:56.568 ********** 2025-04-14 00:47:33.454784 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:47:33.454796 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:47:33.454809 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.454821 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:47:33.454833 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:47:33.454846 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:47:33.454858 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:47:33.454877 | orchestrator | 2025-04-14 00:47:33.454890 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-04-14 00:47:33.454907 | orchestrator | Monday 14 April 2025 00:47:17 +0000 (0:00:03.851) 0:01:00.420 ********** 2025-04-14 00:47:33.454920 | 
orchestrator | ok: [testbed-node-1] 2025-04-14 00:47:33.454932 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.454944 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:47:33.454957 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:47:33.454976 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:47:33.454989 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:47:33.455001 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:47:33.455014 | orchestrator | 2025-04-14 00:47:33.455026 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-04-14 00:47:33.455038 | orchestrator | Monday 14 April 2025 00:47:20 +0000 (0:00:03.424) 0:01:03.844 ********** 2025-04-14 00:47:33.455050 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:47:33.455063 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:47:33.455075 | orchestrator | ok: [testbed-manager] 2025-04-14 00:47:33.455087 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:47:33.455099 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:47:33.455111 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:47:33.455124 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:47:33.455136 | orchestrator | 2025-04-14 00:47:33.455148 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-04-14 00:47:33.455160 | orchestrator | Monday 14 April 2025 00:47:25 +0000 (0:00:04.907) 0:01:08.752 ********** 2025-04-14 00:47:33.455172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-04-14 00:47:33.455186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:47:33.455199 | orchestrator | 2025-04-14 00:47:33.455211 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-04-14 00:47:33.455223 | orchestrator | Monday 14 April 2025 00:47:27 +0000 (0:00:01.599) 0:01:10.352 ********** 2025-04-14 00:47:33.455235 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.455248 | orchestrator | 2025-04-14 00:47:33.455260 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-04-14 00:47:33.455272 | orchestrator | Monday 14 April 2025 00:47:29 +0000 (0:00:02.662) 0:01:13.014 ********** 2025-04-14 00:47:33.455284 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:47:33.455297 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:47:33.455310 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:47:33.455329 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:47:33.455344 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:47:33.455356 | orchestrator | changed: [testbed-manager] 2025-04-14 00:47:33.455369 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:47:33.455381 | orchestrator | 2025-04-14 00:47:33.455393 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:47:33.455406 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455418 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455431 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455448 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455461 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455478 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455490 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:47:33.455503 | orchestrator | 2025-04-14 00:47:33.455515 | orchestrator | Monday 14 April 2025 00:47:32 +0000 (0:00:02.974) 0:01:15.989 ********** 2025-04-14 00:47:33.455528 | orchestrator | =============================================================================== 2025-04-14 00:47:33.455540 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.24s 2025-04-14 00:47:33.455553 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.07s 2025-04-14 00:47:33.455565 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.12s 2025-04-14 00:47:33.455577 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 4.91s 2025-04-14 00:47:33.455589 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.26s 2025-04-14 00:47:33.455601 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 3.85s 2025-04-14 00:47:33.455614 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 3.43s 2025-04-14 00:47:33.455626 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.20s 2025-04-14 00:47:33.455638 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.97s 2025-04-14 00:47:33.455650 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.96s 2025-04-14 00:47:33.455663 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.66s 2025-04-14 00:47:33.455675 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.65s 2025-04-14 00:47:33.455687 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.59s 2025-04-14 00:47:33.455700 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.19s 2025-04-14 00:47:33.455729 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.81s 2025-04-14 00:47:36.494368 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.59s 2025-04-14 00:47:36.494499 | orchestrator | 2025-04-14 00:47:33 | INFO  | Task 683e41de-4e0e-4a37-a4fc-2a667eb65f8a is in state SUCCESS 2025-04-14 00:47:36.494521 | orchestrator | 2025-04-14 00:47:33 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:36.494536 | orchestrator | 2025-04-14 00:47:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:36.494569 | orchestrator | 2025-04-14 00:47:36 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:36.496318 | orchestrator | 2025-04-14 00:47:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:36.496386 | orchestrator | 
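The netdata play that just finished follows the usual Debian-family flow: group the hosts by enable_netdata, add the vendor apt repository and its key, install the package, drop netdata.conf and stream.conf (the manager acts as the streaming parent via server.yml, the nodes as children via client.yml), opt out of anonymous statistics, add the netdata user to the docker group and, on the server only, raise vm.max_map_count. A condensed sketch of those steps is given below; the repository URL, keyring path, sysctl value and host condition are assumptions, not the actual osism.services.netdata tasks.

# Condensed sketch of the netdata steps logged above (URLs and values are assumptions).
- hosts: all
  become: true
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_netdata_{{ enable_netdata | default(true) }}"

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://repo.netdata.cloud/netdatabot.gpg.key     # assumed key URL
        dest: /usr/share/keyrings/netdata.asc

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/usr/share/keyrings/netdata.asc] https://repo.netdata.cloud/repos/stable/ubuntu/ noble/"  # assumed repo line
        state: present

    - name: Install package netdata
      ansible.builtin.apt:
        name: netdata
        state: present
        update_cache: true

    - name: Opt out from anonymous statistics
      ansible.builtin.file:
        path: /etc/netdata/.opt-out-from-anonymous-statistics
        state: touch
        mode: "0644"

    - name: Add netdata user to docker group
      ansible.builtin.user:
        name: netdata
        groups: docker
        append: true

    - name: Set sysctl vm.max_map_count parameter   # applied to the streaming parent only in the play above
      ansible.posix.sysctl:                          # requires the ansible.posix collection
        name: vm.max_map_count
        value: "262144"                              # assumed value
        state: present
      when: inventory_hostname == 'testbed-manager'

The "Task ... is in state STARTED / Wait 1 second(s) until the next check" lines that follow are the manager-side wrapper polling the state of the queued deployment tasks roughly once per second until each reports SUCCESS.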
2025-04-14 00:47:36 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:36.500089 | orchestrator | 2025-04-14 00:47:36 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:36.500500 | orchestrator | 2025-04-14 00:47:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:39.549077 | orchestrator | 2025-04-14 00:47:39 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:39.549264 | orchestrator | 2025-04-14 00:47:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:39.549741 | orchestrator | 2025-04-14 00:47:39 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:39.554068 | orchestrator | 2025-04-14 00:47:39 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:42.597617 | orchestrator | 2025-04-14 00:47:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:42.597861 | orchestrator | 2025-04-14 00:47:42 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:42.600557 | orchestrator | 2025-04-14 00:47:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:42.602609 | orchestrator | 2025-04-14 00:47:42 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:42.603359 | orchestrator | 2025-04-14 00:47:42 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:42.604556 | orchestrator | 2025-04-14 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:45.654913 | orchestrator | 2025-04-14 00:47:45 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:45.655096 | orchestrator | 2025-04-14 00:47:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:45.657340 | orchestrator | 2025-04-14 00:47:45 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:45.658952 | orchestrator | 2025-04-14 00:47:45 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:48.727828 | orchestrator | 2025-04-14 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:48.727958 | orchestrator | 2025-04-14 00:47:48 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:48.732281 | orchestrator | 2025-04-14 00:47:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:48.732828 | orchestrator | 2025-04-14 00:47:48 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:48.734894 | orchestrator | 2025-04-14 00:47:48 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state STARTED 2025-04-14 00:47:51.802598 | orchestrator | 2025-04-14 00:47:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:51.802780 | orchestrator | 2025-04-14 00:47:51 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:51.803808 | orchestrator | 2025-04-14 00:47:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:51.805027 | orchestrator | 2025-04-14 00:47:51 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:51.805616 | orchestrator | 2025-04-14 00:47:51 | INFO  | Task 37bf39de-0ad8-4f80-aee8-792217a553eb is in state SUCCESS 2025-04-14 00:47:51.805753 | orchestrator | 
2025-04-14 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:54.869439 | orchestrator | 2025-04-14 00:47:54 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:47:54.871852 | orchestrator | 2025-04-14 00:47:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:47:57.959086 | orchestrator | 2025-04-14 00:47:54 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:47:57.959238 | orchestrator | 2025-04-14 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:47:57.959292 | orchestrator | 2025-04-14 00:47:57 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:01.011621 | orchestrator | 2025-04-14 00:47:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:01.011750 | orchestrator | 2025-04-14 00:47:57 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:01.011763 | orchestrator | 2025-04-14 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:01.011808 | orchestrator | 2025-04-14 00:48:01 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:01.014079 | orchestrator | 2025-04-14 00:48:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:01.015727 | orchestrator | 2025-04-14 00:48:01 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:04.076934 | orchestrator | 2025-04-14 00:48:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:04.077103 | orchestrator | 2025-04-14 00:48:04 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:04.077200 | orchestrator | 2025-04-14 00:48:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:04.080574 | orchestrator | 2025-04-14 00:48:04 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:07.129110 | orchestrator | 2025-04-14 00:48:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:07.129205 | orchestrator | 2025-04-14 00:48:07 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:07.129484 | orchestrator | 2025-04-14 00:48:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:07.130652 | orchestrator | 2025-04-14 00:48:07 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:10.188447 | orchestrator | 2025-04-14 00:48:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:10.188625 | orchestrator | 2025-04-14 00:48:10 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:10.190108 | orchestrator | 2025-04-14 00:48:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:10.195055 | orchestrator | 2025-04-14 00:48:10 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:13.256330 | orchestrator | 2025-04-14 00:48:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:13.256461 | orchestrator | 2025-04-14 00:48:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:13.256537 | orchestrator | 2025-04-14 00:48:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:13.263525 | orchestrator | 2025-04-14 00:48:13 | INFO  | Task 
8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:16.318231 | orchestrator | 2025-04-14 00:48:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:16.318379 | orchestrator | 2025-04-14 00:48:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:16.319223 | orchestrator | 2025-04-14 00:48:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:16.319272 | orchestrator | 2025-04-14 00:48:16 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:19.369782 | orchestrator | 2025-04-14 00:48:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:19.369957 | orchestrator | 2025-04-14 00:48:19 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:19.371059 | orchestrator | 2025-04-14 00:48:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:19.372819 | orchestrator | 2025-04-14 00:48:19 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:22.423084 | orchestrator | 2025-04-14 00:48:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:22.423232 | orchestrator | 2025-04-14 00:48:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:22.423672 | orchestrator | 2025-04-14 00:48:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:22.423941 | orchestrator | 2025-04-14 00:48:22 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:25.471256 | orchestrator | 2025-04-14 00:48:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:25.471372 | orchestrator | 2025-04-14 00:48:25 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:25.472502 | orchestrator | 2025-04-14 00:48:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:25.473455 | orchestrator | 2025-04-14 00:48:25 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:25.473495 | orchestrator | 2025-04-14 00:48:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:28.524339 | orchestrator | 2025-04-14 00:48:28 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:28.524543 | orchestrator | 2025-04-14 00:48:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:28.526368 | orchestrator | 2025-04-14 00:48:28 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:28.526515 | orchestrator | 2025-04-14 00:48:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:31.592981 | orchestrator | 2025-04-14 00:48:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:31.593238 | orchestrator | 2025-04-14 00:48:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:31.593279 | orchestrator | 2025-04-14 00:48:31 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:31.593783 | orchestrator | 2025-04-14 00:48:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:34.670147 | orchestrator | 2025-04-14 00:48:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:37.726965 | orchestrator | 2025-04-14 00:48:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state 
STARTED 2025-04-14 00:48:37.727108 | orchestrator | 2025-04-14 00:48:34 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:37.727137 | orchestrator | 2025-04-14 00:48:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:37.727178 | orchestrator | 2025-04-14 00:48:37 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:40.806418 | orchestrator | 2025-04-14 00:48:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:40.806548 | orchestrator | 2025-04-14 00:48:37 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:40.806570 | orchestrator | 2025-04-14 00:48:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:40.806605 | orchestrator | 2025-04-14 00:48:40 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:40.806794 | orchestrator | 2025-04-14 00:48:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:40.806825 | orchestrator | 2025-04-14 00:48:40 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:43.855404 | orchestrator | 2025-04-14 00:48:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:43.855551 | orchestrator | 2025-04-14 00:48:43 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:43.856186 | orchestrator | 2025-04-14 00:48:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:43.857918 | orchestrator | 2025-04-14 00:48:43 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:43.858076 | orchestrator | 2025-04-14 00:48:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:46.893304 | orchestrator | 2025-04-14 00:48:46 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:46.893774 | orchestrator | 2025-04-14 00:48:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:46.895099 | orchestrator | 2025-04-14 00:48:46 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state STARTED 2025-04-14 00:48:46.895736 | orchestrator | 2025-04-14 00:48:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:49.964732 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:48:49.965179 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:49.969714 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:49.970261 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:48:49.973001 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task 8cab21ab-f692-40e6-8af9-b160eb28a050 is in state SUCCESS 2025-04-14 00:48:49.975420 | orchestrator | 2025-04-14 00:48:49.975532 | orchestrator | 2025-04-14 00:48:49.975547 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-04-14 00:48:49.975567 | orchestrator | 2025-04-14 00:48:49.975577 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-04-14 00:48:49.975587 | orchestrator | Monday 14 April 2025 00:46:41 +0000 (0:00:00.441) 0:00:00.441 ********** 2025-04-14 00:48:49.975597 | 
orchestrator | ok: [testbed-manager] 2025-04-14 00:48:49.975607 | orchestrator | 2025-04-14 00:48:49.975617 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-04-14 00:48:49.975626 | orchestrator | Monday 14 April 2025 00:46:44 +0000 (0:00:02.709) 0:00:03.150 ********** 2025-04-14 00:48:49.975636 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-04-14 00:48:49.975653 | orchestrator | 2025-04-14 00:48:49.975663 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-04-14 00:48:49.975694 | orchestrator | Monday 14 April 2025 00:46:45 +0000 (0:00:00.970) 0:00:04.121 ********** 2025-04-14 00:48:49.975704 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.975714 | orchestrator | 2025-04-14 00:48:49.975723 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-04-14 00:48:49.975732 | orchestrator | Monday 14 April 2025 00:46:47 +0000 (0:00:02.019) 0:00:06.141 ********** 2025-04-14 00:48:49.975741 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-04-14 00:48:49.975761 | orchestrator | ok: [testbed-manager] 2025-04-14 00:48:49.975778 | orchestrator | 2025-04-14 00:48:49.975790 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-04-14 00:48:49.975799 | orchestrator | Monday 14 April 2025 00:47:46 +0000 (0:00:58.657) 0:01:04.798 ********** 2025-04-14 00:48:49.975809 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.975818 | orchestrator | 2025-04-14 00:48:49.975827 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:48:49.975837 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:48:49.975847 | orchestrator | 2025-04-14 00:48:49.975857 | orchestrator | Monday 14 April 2025 00:47:49 +0000 (0:00:03.653) 0:01:08.452 ********** 2025-04-14 00:48:49.975882 | orchestrator | =============================================================================== 2025-04-14 00:48:49.975892 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.66s 2025-04-14 00:48:49.975901 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.65s 2025-04-14 00:48:49.975910 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.71s 2025-04-14 00:48:49.975920 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.02s 2025-04-14 00:48:49.975929 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.97s 2025-04-14 00:48:49.975939 | orchestrator | 2025-04-14 00:48:49.975948 | orchestrator | 2025-04-14 00:48:49.975963 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-04-14 00:48:49.975981 | orchestrator | 2025-04-14 00:48:49.975993 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-14 00:48:49.976003 | orchestrator | Monday 14 April 2025 00:46:12 +0000 (0:00:00.358) 0:00:00.358 ********** 2025-04-14 00:48:49.976014 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:48:49.976026 
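The common play that starts here prepares the kolla base services (cron, fluentd, kolla-toolbox) on every host: it creates the per-service config directories and then, via service-cert-copy, distributes extra CA certificates to each enabled service, looping over a services dict whose {'key': ..., 'value': {...}} items are echoed at length below. The following playbook is only a schematic of that loop; the dict contents are abbreviated from the log output and the source/destination paths are assumptions, not kolla-ansible's actual task definitions.

# Schematic of the per-service directory and cert-copy loop (names and paths are assumptions).
- hosts: all
  become: true
  vars:
    common_services:                 # abbreviated; the real dict carries image, volumes, environment, ...
      fluentd:
        container_name: fluentd
        enabled: true
      kolla-toolbox:
        container_name: kolla_toolbox
        enabled: true
      cron:
        container_name: cron
        enabled: true
  tasks:
    - name: Ensuring config directories exist
      ansible.builtin.file:
        path: "/etc/kolla/{{ item.key }}"
        state: directory
        mode: "0770"
      with_dict: "{{ common_services }}"

    - name: common | Copying over extra CA certificates
      ansible.builtin.copy:
        src: /etc/kolla/certificates/ca/                   # assumed source directory on the controller
        dest: "/etc/kolla/{{ item.key }}/ca-certificates/"
        mode: "0644"
      with_dict: "{{ common_services }}"
      when: item.value.enabled | bool

The backend internal TLS certificate and key tasks that appear next in the log use the same loop but are skipped here because backend TLS is not enabled for these services.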
| orchestrator | 2025-04-14 00:48:49.976036 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-04-14 00:48:49.976046 | orchestrator | Monday 14 April 2025 00:46:14 +0000 (0:00:01.794) 0:00:02.152 ********** 2025-04-14 00:48:49.976057 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976067 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976077 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976088 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976098 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976109 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976119 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976130 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976140 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976150 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976161 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976171 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-04-14 00:48:49.976181 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976191 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976201 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976215 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976226 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-04-14 00:48:49.976247 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976258 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976269 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976279 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-04-14 00:48:49.976290 | orchestrator | 2025-04-14 00:48:49.976307 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-04-14 00:48:49.976316 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:04.821) 0:00:06.974 ********** 2025-04-14 00:48:49.976326 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:48:49.976340 | orchestrator | 2025-04-14 00:48:49.976349 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-04-14 00:48:49.976359 | orchestrator 
| Monday 14 April 2025 00:46:21 +0000 (0:00:02.365) 0:00:09.340 ********** 2025-04-14 00:48:49.976371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976403 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976437 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976472 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.976507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976567 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.976609 | orchestrator | 2025-04-14 00:48:49.976619 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-04-14 00:48:49.976628 | orchestrator | Monday 14 April 2025 00:46:28 +0000 (0:00:06.683) 0:00:16.023 ********** 2025-04-14 00:48:49.976642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976706 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976719 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:48:49.976729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976801 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:48:49.976811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976840 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:48:49.976849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976884 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:48:49.976893 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:48:49.976908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976937 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:48:49.976947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.976957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.976981 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:48:49.976991 | orchestrator | 2025-04-14 00:48:49.977001 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-04-14 
00:48:49.977010 | orchestrator | Monday 14 April 2025 00:46:31 +0000 (0:00:02.820) 0:00:18.844 ********** 2025-04-14 00:48:49.977019 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977034 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977430 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977452 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:48:49.977462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977567 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:48:49.977576 
| orchestrator | skipping: [testbed-node-1] 2025-04-14 00:48:49.977586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977624 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:48:49.977633 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:48:49.977643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977725 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:48:49.977735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-04-14 00:48:49.977745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.977770 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:48:49.977779 | orchestrator | 2025-04-14 00:48:49.977789 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-04-14 00:48:49.977799 | orchestrator | Monday 14 April 2025 00:46:34 +0000 (0:00:03.696) 0:00:22.540 ********** 2025-04-14 00:48:49.977808 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:48:49.977817 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:48:49.977827 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:48:49.977836 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:48:49.977845 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:48:49.977854 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:48:49.977863 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:48:49.977872 | orchestrator | 2025-04-14 00:48:49.977882 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-04-14 00:48:49.977891 | orchestrator | Monday 14 April 2025 00:46:36 +0000 (0:00:01.992) 0:00:24.533 ********** 2025-04-14 00:48:49.977900 | orchestrator | skipping: [testbed-manager] 2025-04-14 00:48:49.977910 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:48:49.977919 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:48:49.977928 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:48:49.977937 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:48:49.977946 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:48:49.977955 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:48:49.977965 | orchestrator | 2025-04-14 00:48:49.977974 | orchestrator | 
TASK [common : Ensure fluentd image is present for label check] ****************
2025-04-14 00:48:49.977983 | orchestrator | Monday 14 April 2025 00:46:38 +0000 (0:00:01.858) 0:00:26.392 **********
2025-04-14 00:48:49.977993 | orchestrator | ok: [testbed-node-0]
2025-04-14 00:48:49.978002 | orchestrator | changed: [testbed-node-1]
2025-04-14 00:48:49.978011 | orchestrator | changed: [testbed-node-3]
2025-04-14 00:48:49.978083 | orchestrator | changed: [testbed-node-2]
2025-04-14 00:48:49.978094 | orchestrator | changed: [testbed-node-5]
2025-04-14 00:48:49.978104 | orchestrator | changed: [testbed-node-4]
2025-04-14 00:48:49.978115 | orchestrator | changed: [testbed-manager]
2025-04-14 00:48:49.978126 | orchestrator |
2025-04-14 00:48:49.978137 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ******************************
2025-04-14 00:48:49.978147 | orchestrator | Monday 14 April 2025 00:47:18 +0000 (0:00:39.840) 0:01:06.233 **********
2025-04-14 00:48:49.978157 | orchestrator | ok: [testbed-manager]
2025-04-14 00:48:49.978173 | orchestrator | ok: [testbed-node-0]
2025-04-14 00:48:49.978184 | orchestrator | ok: [testbed-node-1]
2025-04-14 00:48:49.978194 | orchestrator | ok: [testbed-node-2]
2025-04-14 00:48:49.978204 | orchestrator | ok: [testbed-node-3]
2025-04-14 00:48:49.978214 | orchestrator | ok: [testbed-node-4]
2025-04-14 00:48:49.978224 | orchestrator | ok: [testbed-node-5]
2025-04-14 00:48:49.978252 | orchestrator |
2025-04-14 00:48:49.978263 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-04-14 00:48:49.978273 | orchestrator | Monday 14 April 2025 00:47:23 +0000 (0:00:04.983) 0:01:11.217 **********
2025-04-14 00:48:49.978283 | orchestrator | ok: [testbed-manager]
2025-04-14 00:48:49.978294 | orchestrator | ok: [testbed-node-0]
2025-04-14 00:48:49.978304 | orchestrator | ok: [testbed-node-1]
2025-04-14 00:48:49.978314 | orchestrator | ok: [testbed-node-2]
2025-04-14 00:48:49.978324 | orchestrator | ok: [testbed-node-3]
2025-04-14 00:48:49.978341 | orchestrator | ok: [testbed-node-4]
2025-04-14 00:48:49.978351 | orchestrator | ok: [testbed-node-5]
2025-04-14 00:48:49.978362 | orchestrator |
2025-04-14 00:48:49.978372 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-04-14 00:48:49.978383 | orchestrator | Monday 14 April 2025 00:47:25 +0000 (0:00:01.871) 0:01:13.088 **********
2025-04-14 00:48:49.978393 | orchestrator | skipping: [testbed-manager]
2025-04-14 00:48:49.978404 | orchestrator | skipping: [testbed-node-0]
2025-04-14 00:48:49.978413 | orchestrator | skipping: [testbed-node-1]
2025-04-14 00:48:49.978422 | orchestrator | skipping: [testbed-node-2]
2025-04-14 00:48:49.978432 | orchestrator | skipping: [testbed-node-3]
2025-04-14 00:48:49.978441 | orchestrator | skipping: [testbed-node-4]
2025-04-14 00:48:49.978450 | orchestrator | skipping: [testbed-node-5]
2025-04-14 00:48:49.978459 | orchestrator |
2025-04-14 00:48:49.978468 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-04-14 00:48:49.978478 | orchestrator | Monday 14 April 2025 00:47:26 +0000 (0:00:01.274) 0:01:14.362 **********
2025-04-14 00:48:49.978487 | orchestrator | skipping: [testbed-manager]
2025-04-14 00:48:49.978496 | orchestrator | skipping: [testbed-node-0]
2025-04-14 00:48:49.978505 | orchestrator | skipping: [testbed-node-1]
2025-04-14 00:48:49.978514 | orchestrator | skipping: [testbed-node-2]
2025-04-14 00:48:49.978523 |
orchestrator | skipping: [testbed-node-3] 2025-04-14 00:48:49.978533 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:48:49.978542 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:48:49.978551 | orchestrator | 2025-04-14 00:48:49.978560 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-04-14 00:48:49.978570 | orchestrator | Monday 14 April 2025 00:47:27 +0000 (0:00:00.728) 0:01:15.091 ********** 2025-04-14 00:48:49.978579 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978627 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978695 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.978716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 
00:48:49.978777 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.978852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-14 00:48:49.978862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-14 00:48:49.978888 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-14 00:48:49.978899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-04-14 00:48:49.978909 | orchestrator |
2025-04-14 00:48:49.978919 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-04-14 00:48:49.978930 | orchestrator | Monday 14 April 2025 00:47:31 +0000 (0:00:04.432) 0:01:19.524 **********
2025-04-14 00:48:49.978940 | orchestrator | [WARNING]: Skipped
2025-04-14 00:48:49.978950 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-04-14 00:48:49.978960 | orchestrator | to this access issue:
2025-04-14 00:48:49.978970 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-04-14 00:48:49.978980 | orchestrator | directory
2025-04-14 00:48:49.978991 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-14 00:48:49.979001 | orchestrator |
2025-04-14 00:48:49.979011 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-04-14 00:48:49.979021 | orchestrator | Monday 14 April 2025 00:47:32 +0000 (0:00:00.817) 0:01:20.342 **********
2025-04-14 00:48:49.979031 | orchestrator | [WARNING]: Skipped
2025-04-14 00:48:49.979045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-04-14 00:48:49.979055 | orchestrator | to this access issue:
2025-04-14 00:48:49.979066 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-04-14 00:48:49.979076 | orchestrator | directory
2025-04-14 00:48:49.979086 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-14 00:48:49.979096 | orchestrator |
2025-04-14 00:48:49.979106 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-04-14 00:48:49.979116 | orchestrator | Monday 14 April 2025 00:47:33 +0000 (0:00:00.487) 0:01:20.829 **********
2025-04-14 00:48:49.979127 | orchestrator | [WARNING]: Skipped
2025-04-14 00:48:49.979137 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-04-14 00:48:49.979147 | orchestrator | to this access issue:
2025-04-14 00:48:49.979157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-04-14 00:48:49.979167 | orchestrator | directory
2025-04-14 00:48:49.979177 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-14 00:48:49.979187 | orchestrator |
2025-04-14 00:48:49.979197 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-04-14 00:48:49.979207 | orchestrator | Monday 14 April 2025 00:47:33 +0000 (0:00:00.623) 0:01:21.452 **********
2025-04-14 00:48:49.979217 | orchestrator | [WARNING]: Skipped
2025-04-14 00:48:49.979228 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-04-14 00:48:49.979238 | orchestrator | to this access issue:
2025-04-14 00:48:49.979248 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-04-14 00:48:49.979263 | orchestrator | directory
2025-04-14 00:48:49.979273 | orchestrator | ok: [testbed-manager -> localhost]
2025-04-14 00:48:49.979283 | orchestrator |
2025-04-14 00:48:49.979293 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-04-14 00:48:49.979303 | orchestrator | Monday 14 April 2025 00:47:34 +0000 (0:00:00.869) 0:01:22.322 **********
2025-04-14 00:48:49.979313 | orchestrator | changed: [testbed-manager]
2025-04-14 00:48:49.979323 | orchestrator | changed: [testbed-node-2]
2025-04-14 00:48:49.979333 | orchestrator | changed: [testbed-node-0]
2025-04-14 00:48:49.979343 | orchestrator | changed: [testbed-node-1]
2025-04-14 00:48:49.979353 | orchestrator | changed: [testbed-node-3]
2025-04-14 00:48:49.979363 | orchestrator | changed: [testbed-node-4]
2025-04-14 00:48:49.979373 | orchestrator | changed: [testbed-node-5]
2025-04-14 00:48:49.979383 | orchestrator |
2025-04-14 00:48:49.979394 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-04-14 00:48:49.979404 | orchestrator | Monday 14 April 2025 00:47:39 +0000 (0:00:05.352) 0:01:27.674 **********
2025-04-14 00:48:49.979414 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979434 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979444 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979454 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979465 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979475 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-04-14 00:48:49.979485 | orchestrator |
2025-04-14 00:48:49.979495 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-04-14 00:48:49.979505 | orchestrator | Monday 14 April 2025 00:47:43 +0000 (0:00:03.258) 0:01:30.933 ********** 2025-04-14 00:48:49.979515 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.979525 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.979535 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.979545 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.979555 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.979569 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.979580 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.979590 | orchestrator | 2025-04-14 00:48:49.979600 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-04-14 00:48:49.979610 | orchestrator | Monday 14 April 2025 00:47:45 +0000 (0:00:02.585) 0:01:33.518 ********** 2025-04-14 00:48:49.979621 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979651 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979724 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979740 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979752 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979779 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979790 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979821 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979842 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979876 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979886 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979901 | orchestrator | ok: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.979912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:48:49.979925 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979936 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.979947 | orchestrator | 2025-04-14 00:48:49.979957 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-04-14 00:48:49.979967 | orchestrator | Monday 14 April 2025 00:47:48 +0000 (0:00:03.093) 0:01:36.612 ********** 2025-04-14 00:48:49.979978 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.979988 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.979998 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.980009 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.980019 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.980029 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.980039 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-04-14 00:48:49.980049 | orchestrator | 2025-04-14 00:48:49.980060 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-04-14 00:48:49.980078 | orchestrator | Monday 14 April 2025 
00:47:52 +0000 (0:00:03.297) 0:01:39.909 ********** 2025-04-14 00:48:49.980089 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980099 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980115 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980125 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980136 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980146 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980156 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-04-14 00:48:49.980166 | orchestrator | 2025-04-14 00:48:49.980176 | orchestrator | TASK [common : Check common containers] **************************************** 2025-04-14 00:48:49.980186 | orchestrator | Monday 14 April 2025 00:47:55 +0000 (0:00:03.018) 0:01:42.927 ********** 2025-04-14 00:48:49.980200 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980211 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980330 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-04-14 00:48:49.980414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:48:49.980493 | orchestrator | 2025-04-14 00:48:49.980504 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-04-14 00:48:49.980514 | orchestrator | Monday 14 April 2025 00:47:59 +0000 (0:00:04.368) 0:01:47.296 ********** 2025-04-14 00:48:49.980525 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.980539 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.980549 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.980559 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.980569 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.980579 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.980589 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.980599 | orchestrator | 2025-04-14 00:48:49.980610 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-04-14 00:48:49.980620 | orchestrator | Monday 14 April 2025 00:48:01 +0000 
(0:00:01.752) 0:01:49.049 ********** 2025-04-14 00:48:49.980630 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.980644 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.980654 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.980683 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.980695 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.980705 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.980715 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.980725 | orchestrator | 2025-04-14 00:48:49.980736 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980746 | orchestrator | Monday 14 April 2025 00:48:02 +0000 (0:00:01.482) 0:01:50.532 ********** 2025-04-14 00:48:49.980756 | orchestrator | 2025-04-14 00:48:49.980766 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980776 | orchestrator | Monday 14 April 2025 00:48:02 +0000 (0:00:00.058) 0:01:50.590 ********** 2025-04-14 00:48:49.980786 | orchestrator | 2025-04-14 00:48:49.980796 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980806 | orchestrator | Monday 14 April 2025 00:48:02 +0000 (0:00:00.059) 0:01:50.649 ********** 2025-04-14 00:48:49.980816 | orchestrator | 2025-04-14 00:48:49.980826 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980837 | orchestrator | Monday 14 April 2025 00:48:02 +0000 (0:00:00.057) 0:01:50.706 ********** 2025-04-14 00:48:49.980846 | orchestrator | 2025-04-14 00:48:49.980857 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980867 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.257) 0:01:50.964 ********** 2025-04-14 00:48:49.980877 | orchestrator | 2025-04-14 00:48:49.980887 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980897 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.055) 0:01:51.020 ********** 2025-04-14 00:48:49.980907 | orchestrator | 2025-04-14 00:48:49.980918 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-04-14 00:48:49.980928 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.055) 0:01:51.075 ********** 2025-04-14 00:48:49.980938 | orchestrator | 2025-04-14 00:48:49.980948 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-04-14 00:48:49.980958 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.092) 0:01:51.167 ********** 2025-04-14 00:48:49.980968 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.980978 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.980988 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.980998 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.981014 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.981024 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.981034 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.981044 | orchestrator | 2025-04-14 00:48:49.981054 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-04-14 00:48:49.981064 | orchestrator | Monday 14 April 2025 00:48:10 
+0000 (0:00:07.481) 0:01:58.649 ********** 2025-04-14 00:48:49.981074 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.981084 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.981094 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.981104 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.981114 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.981124 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.981134 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.981144 | orchestrator | 2025-04-14 00:48:49.981154 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-04-14 00:48:49.981164 | orchestrator | Monday 14 April 2025 00:48:34 +0000 (0:00:23.854) 0:02:22.503 ********** 2025-04-14 00:48:49.981174 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:48:49.981184 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:48:49.981194 | orchestrator | ok: [testbed-manager] 2025-04-14 00:48:49.981204 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:48:49.981214 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:48:49.981224 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:48:49.981234 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:48:49.981244 | orchestrator | 2025-04-14 00:48:49.981254 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-04-14 00:48:49.981264 | orchestrator | Monday 14 April 2025 00:48:37 +0000 (0:00:02.770) 0:02:25.274 ********** 2025-04-14 00:48:49.981274 | orchestrator | changed: [testbed-manager] 2025-04-14 00:48:49.981284 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:48:49.981294 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:48:49.981304 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:48:49.981315 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:48:49.981325 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:48:49.981335 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:48:49.981345 | orchestrator | 2025-04-14 00:48:49.981355 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:48:49.981365 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:49.981376 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:49.981387 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:49.981401 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:53.032488 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:53.032587 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:53.032600 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 00:48:53.032610 | orchestrator | 2025-04-14 00:48:53.032619 | orchestrator | 2025-04-14 00:48:53.032629 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:48:53.032639 | orchestrator | Monday 14 April 2025 00:48:47 +0000 (0:00:10.147) 0:02:35.422 ********** 2025-04-14 00:48:53.032648 | orchestrator | 
=============================================================================== 2025-04-14 00:48:53.032709 | orchestrator | common : Ensure fluentd image is present for label check --------------- 39.84s 2025-04-14 00:48:53.032720 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 23.85s 2025-04-14 00:48:53.032740 | orchestrator | common : Restart cron container ---------------------------------------- 10.15s 2025-04-14 00:48:53.032750 | orchestrator | common : Restart fluentd container -------------------------------------- 7.48s 2025-04-14 00:48:53.032758 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.68s 2025-04-14 00:48:53.032768 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 5.35s 2025-04-14 00:48:53.032776 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 4.98s 2025-04-14 00:48:53.032785 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.82s 2025-04-14 00:48:53.032794 | orchestrator | common : Copying over config.json files for services -------------------- 4.43s 2025-04-14 00:48:53.032802 | orchestrator | common : Check common containers ---------------------------------------- 4.37s 2025-04-14 00:48:53.032811 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.70s 2025-04-14 00:48:53.032820 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.30s 2025-04-14 00:48:53.032829 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.26s 2025-04-14 00:48:53.032837 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.09s 2025-04-14 00:48:53.032847 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.02s 2025-04-14 00:48:53.032856 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.82s 2025-04-14 00:48:53.032865 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.77s 2025-04-14 00:48:53.032874 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.59s 2025-04-14 00:48:53.032883 | orchestrator | common : include_tasks -------------------------------------------------- 2.37s 2025-04-14 00:48:53.032891 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.99s 2025-04-14 00:48:53.032900 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:48:53.032909 | orchestrator | 2025-04-14 00:48:49 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:48:53.032918 | orchestrator | 2025-04-14 00:48:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:53.032940 | orchestrator | 2025-04-14 00:48:53 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:48:53.033058 | orchestrator | 2025-04-14 00:48:53 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:53.033872 | orchestrator | 2025-04-14 00:48:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:53.036250 | orchestrator | 2025-04-14 00:48:53 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:48:53.037386 | orchestrator | 2025-04-14 
00:48:53 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:48:53.037994 | orchestrator | 2025-04-14 00:48:53 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:48:53.038941 | orchestrator | 2025-04-14 00:48:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:56.081923 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:48:56.082904 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:56.084774 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:56.088533 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:48:56.088810 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:48:56.088829 | orchestrator | 2025-04-14 00:48:56 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:48:56.089227 | orchestrator | 2025-04-14 00:48:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:48:59.174349 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:48:59.210622 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:48:59.212788 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:48:59.218159 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:02.297003 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:02.297120 | orchestrator | 2025-04-14 00:48:59 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:49:02.297138 | orchestrator | 2025-04-14 00:48:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:02.297167 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:02.297533 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:02.298931 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:02.300031 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:02.300862 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:02.301620 | orchestrator | 2025-04-14 00:49:02 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:49:02.301836 | orchestrator | 2025-04-14 00:49:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:05.351119 | orchestrator | 2025-04-14 00:49:05 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:05.353590 | orchestrator | 2025-04-14 00:49:05 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:05.355975 | orchestrator | 2025-04-14 00:49:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:05.358839 | 
orchestrator | 2025-04-14 00:49:05 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:05.365869 | orchestrator | 2025-04-14 00:49:05 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:05.370635 | orchestrator | 2025-04-14 00:49:05 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:49:08.402813 | orchestrator | 2025-04-14 00:49:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:08.402938 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:08.403755 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:08.405075 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:08.405672 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:08.406482 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:08.407815 | orchestrator | 2025-04-14 00:49:08 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:49:11.449150 | orchestrator | 2025-04-14 00:49:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:11.449286 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:11.449768 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:11.451046 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:11.453588 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:11.455195 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:11.457275 | orchestrator | 2025-04-14 00:49:11 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state STARTED 2025-04-14 00:49:11.457384 | orchestrator | 2025-04-14 00:49:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:14.509400 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:14.510645 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:14.510684 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:14.510692 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:14.510698 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:14.510708 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:14.512102 | orchestrator | 2025-04-14 00:49:14 | INFO  | Task 17bf9f5d-50a3-40ce-afd0-85ff3fb795fa is in state SUCCESS 2025-04-14 00:49:17.565047 | orchestrator | 2025-04-14 00:49:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:17.565189 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 
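The repeated "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deploy wrapper polling the state of the queued Kolla tasks. A minimal sketch of that behaviour, assuming a hypothetical `get_task_state` callable and illustrative task IDs (neither is the real OSISM API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll each task until none of them is in state STARTED anymore."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

This mirrors the one-second polling cadence visible in the log; the real tool may batch or order the checks differently.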
2025-04-14 00:49:17.567977 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:17.569374 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:17.570179 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:17.571182 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:17.572552 | orchestrator | 2025-04-14 00:49:17 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:20.624346 | orchestrator | 2025-04-14 00:49:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:20.625361 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:20.629086 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:20.629208 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:23.674998 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:23.675110 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:23.675127 | orchestrator | 2025-04-14 00:49:20 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state STARTED 2025-04-14 00:49:23.675142 | orchestrator | 2025-04-14 00:49:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:23.675172 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:23.675349 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:23.676727 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:23.677947 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:23.678956 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:23.679579 | orchestrator | 2025-04-14 00:49:23 | INFO  | Task 47f1d7d4-ced7-453f-b350-9dbd34e9ebed is in state SUCCESS 2025-04-14 00:49:23.680635 | orchestrator | 2025-04-14 00:49:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:23.680736 | orchestrator | 2025-04-14 00:49:23.680764 | orchestrator | 2025-04-14 00:49:23.680788 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:49:23.680813 | orchestrator | 2025-04-14 00:49:23.680837 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:49:23.680860 | orchestrator | Monday 14 April 2025 00:48:52 +0000 (0:00:00.406) 0:00:00.406 ********** 2025-04-14 00:49:23.680885 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:49:23.680910 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:49:23.680932 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:49:23.680946 | orchestrator | 2025-04-14 00:49:23.680961 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:49:23.680975 | orchestrator | 
Monday 14 April 2025 00:48:53 +0000 (0:00:00.792) 0:00:01.198 ********** 2025-04-14 00:49:23.680990 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-04-14 00:49:23.681005 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-04-14 00:49:23.681019 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-04-14 00:49:23.681033 | orchestrator | 2025-04-14 00:49:23.681047 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-04-14 00:49:23.681061 | orchestrator | 2025-04-14 00:49:23.681075 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-04-14 00:49:23.681089 | orchestrator | Monday 14 April 2025 00:48:54 +0000 (0:00:00.682) 0:00:01.880 ********** 2025-04-14 00:49:23.681106 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:49:23.681131 | orchestrator | 2025-04-14 00:49:23.681155 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-04-14 00:49:23.681180 | orchestrator | Monday 14 April 2025 00:48:55 +0000 (0:00:01.153) 0:00:03.033 ********** 2025-04-14 00:49:23.681203 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-14 00:49:23.681217 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-14 00:49:23.681232 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-14 00:49:23.681247 | orchestrator | 2025-04-14 00:49:23.681262 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-04-14 00:49:23.681277 | orchestrator | Monday 14 April 2025 00:48:56 +0000 (0:00:01.036) 0:00:04.070 ********** 2025-04-14 00:49:23.681320 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-04-14 00:49:23.681339 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-04-14 00:49:23.681363 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-04-14 00:49:23.681388 | orchestrator | 2025-04-14 00:49:23.681413 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-04-14 00:49:23.681437 | orchestrator | Monday 14 April 2025 00:48:59 +0000 (0:00:03.060) 0:00:07.131 ********** 2025-04-14 00:49:23.681461 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:49:23.681505 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:49:23.681550 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:49:23.681575 | orchestrator | 2025-04-14 00:49:23.681609 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-04-14 00:49:23.681635 | orchestrator | Monday 14 April 2025 00:49:04 +0000 (0:00:04.857) 0:00:11.989 ********** 2025-04-14 00:49:23.681682 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:49:23.681698 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:49:23.681712 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:49:23.681726 | orchestrator | 2025-04-14 00:49:23.681741 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:49:23.681755 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:23.681771 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:23.681786 | orchestrator | 
testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:23.681800 | orchestrator | 2025-04-14 00:49:23.681814 | orchestrator | 2025-04-14 00:49:23.681828 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:49:23.681842 | orchestrator | Monday 14 April 2025 00:49:12 +0000 (0:00:08.108) 0:00:20.097 ********** 2025-04-14 00:49:23.681866 | orchestrator | =============================================================================== 2025-04-14 00:49:23.681890 | orchestrator | memcached : Restart memcached container --------------------------------- 8.11s 2025-04-14 00:49:23.681914 | orchestrator | memcached : Check memcached container ----------------------------------- 4.86s 2025-04-14 00:49:23.681938 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.06s 2025-04-14 00:49:23.681962 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.15s 2025-04-14 00:49:23.681987 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.04s 2025-04-14 00:49:23.682010 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.79s 2025-04-14 00:49:23.682095 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-04-14 00:49:23.682110 | orchestrator | 2025-04-14 00:49:23.682124 | orchestrator | 2025-04-14 00:49:23.682138 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:49:23.682152 | orchestrator | 2025-04-14 00:49:23.682166 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:49:23.682180 | orchestrator | Monday 14 April 2025 00:48:52 +0000 (0:00:00.424) 0:00:00.424 ********** 2025-04-14 00:49:23.682194 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:49:23.682208 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:49:23.682222 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:49:23.682236 | orchestrator | 2025-04-14 00:49:23.682251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:49:23.682281 | orchestrator | Monday 14 April 2025 00:48:53 +0000 (0:00:00.844) 0:00:01.269 ********** 2025-04-14 00:49:23.682296 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-04-14 00:49:23.682311 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-04-14 00:49:23.682339 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-04-14 00:49:23.682352 | orchestrator | 2025-04-14 00:49:23.682369 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-04-14 00:49:23.682394 | orchestrator | 2025-04-14 00:49:23.682421 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-04-14 00:49:23.682445 | orchestrator | Monday 14 April 2025 00:48:53 +0000 (0:00:00.425) 0:00:01.695 ********** 2025-04-14 00:49:23.682470 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:49:23.682496 | orchestrator | 2025-04-14 00:49:23.682522 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-04-14 00:49:23.682547 | orchestrator | Monday 14 April 2025 00:48:55 +0000 (0:00:01.354) 0:00:03.049 ********** 
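The loop items printed by the redis tasks below all share one service-definition shape. A sketch of that structure, reconstructed from the logged items rather than taken from the role's defaults file (field values are simply what the log shows):

```python
# Service map the redis role iterates over; each value describes one container.
redis_services = {
    "redis": {
        "container_name": "redis",
        "group": "redis",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/redis:6.0.16.20241206",
        "volumes": [
            "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "redis:/var/lib/redis/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        },
    },
    # "redis-sentinel" follows the same pattern with its own image, a
    # REDIS_CONF/REDIS_GEN_CONF environment and a port-26379 healthcheck.
}
```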
2025-04-14 00:49:23.682574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682817 | orchestrator | 2025-04-14 00:49:23.682842 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-04-14 00:49:23.682866 | orchestrator | Monday 14 April 2025 00:48:57 +0000 (0:00:02.367) 0:00:05.416 ********** 2025-04-14 00:49:23.682889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.682977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683030 | orchestrator | 2025-04-14 00:49:23.683045 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-04-14 00:49:23.683059 | orchestrator | Monday 14 April 2025 00:49:02 +0000 (0:00:05.210) 0:00:10.626 ********** 2025-04-14 00:49:23.683073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683177 | orchestrator | 2025-04-14 00:49:23.683191 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-04-14 00:49:23.683206 | orchestrator | Monday 14 April 2025 00:49:07 +0000 (0:00:04.613) 0:00:15.240 ********** 2025-04-14 00:49:23.683221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683251 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:23.683309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-04-14 00:49:26.733978 | orchestrator | 2025-04-14 00:49:26.734147 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-14 00:49:26.734167 | orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:02.301) 0:00:17.542 ********** 2025-04-14 00:49:26.734180 | orchestrator | 2025-04-14 00:49:26.734192 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-14 00:49:26.734204 | orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:00.075) 0:00:17.617 ********** 2025-04-14 00:49:26.734215 | orchestrator | 2025-04-14 00:49:26.734227 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-04-14 00:49:26.734238 | 
orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:00.071) 0:00:17.689 ********** 2025-04-14 00:49:26.734250 | orchestrator | 2025-04-14 00:49:26.734261 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-04-14 00:49:26.734272 | orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:00.192) 0:00:17.881 ********** 2025-04-14 00:49:26.734283 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:49:26.734296 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:49:26.734308 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:49:26.734319 | orchestrator | 2025-04-14 00:49:26.734330 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-04-14 00:49:26.734342 | orchestrator | Monday 14 April 2025 00:49:18 +0000 (0:00:08.800) 0:00:26.688 ********** 2025-04-14 00:49:26.734353 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:49:26.734364 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:49:26.734392 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:49:26.734404 | orchestrator | 2025-04-14 00:49:26.734415 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:49:26.734427 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:26.734439 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:26.734451 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:49:26.734462 | orchestrator | 2025-04-14 00:49:26.734476 | orchestrator | 2025-04-14 00:49:26.734488 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:49:26.734501 | orchestrator | Monday 14 April 2025 00:49:22 +0000 (0:00:04.014) 0:00:30.702 ********** 2025-04-14 00:49:26.734513 | orchestrator | =============================================================================== 2025-04-14 00:49:26.734526 | orchestrator | redis : Restart redis container ----------------------------------------- 8.81s 2025-04-14 00:49:26.734538 | orchestrator | redis : Copying over default config.json files -------------------------- 5.21s 2025-04-14 00:49:26.734569 | orchestrator | redis : Copying over redis config files --------------------------------- 4.61s 2025-04-14 00:49:26.734582 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.01s 2025-04-14 00:49:26.734594 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.37s 2025-04-14 00:49:26.734607 | orchestrator | redis : Check redis containers ------------------------------------------ 2.30s 2025-04-14 00:49:26.734619 | orchestrator | redis : include_tasks --------------------------------------------------- 1.35s 2025-04-14 00:49:26.734632 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2025-04-14 00:49:26.734666 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-04-14 00:49:26.734687 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.34s 2025-04-14 00:49:26.734725 | orchestrator | 2025-04-14 00:49:26 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:26.734813 | orchestrator | 2025-04-14 00:49:26 | INFO  | Task 
d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:26.735269 | orchestrator | 2025-04-14 00:49:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:26.737341 | orchestrator | 2025-04-14 00:49:26 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:26.738110 | orchestrator | 2025-04-14 00:49:26 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:26.739937 | orchestrator | 2025-04-14 00:49:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:29.782161 | orchestrator | 2025-04-14 00:49:29 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:29.782967 | orchestrator | 2025-04-14 00:49:29 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:29.783044 | orchestrator | 2025-04-14 00:49:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:29.783549 | orchestrator | 2025-04-14 00:49:29 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:29.784390 | orchestrator | 2025-04-14 00:49:29 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:32.829374 | orchestrator | 2025-04-14 00:49:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:32.829512 | orchestrator | 2025-04-14 00:49:32 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:32.832896 | orchestrator | 2025-04-14 00:49:32 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:32.834950 | orchestrator | 2025-04-14 00:49:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:32.836980 | orchestrator | 2025-04-14 00:49:32 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:32.839345 | orchestrator | 2025-04-14 00:49:32 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:35.885365 | orchestrator | 2025-04-14 00:49:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:35.885508 | orchestrator | 2025-04-14 00:49:35 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:35.886573 | orchestrator | 2025-04-14 00:49:35 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:35.886609 | orchestrator | 2025-04-14 00:49:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:35.887622 | orchestrator | 2025-04-14 00:49:35 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:35.888045 | orchestrator | 2025-04-14 00:49:35 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:38.938533 | orchestrator | 2025-04-14 00:49:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:38.938754 | orchestrator | 2025-04-14 00:49:38 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:38.939721 | orchestrator | 2025-04-14 00:49:38 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:38.940722 | orchestrator | 2025-04-14 00:49:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:38.941723 | orchestrator | 2025-04-14 00:49:38 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:38.943091 | orchestrator | 2025-04-14 
00:49:38 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:41.983711 | orchestrator | 2025-04-14 00:49:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:41.983829 | orchestrator | 2025-04-14 00:49:41 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:41.984572 | orchestrator | 2025-04-14 00:49:41 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:41.986794 | orchestrator | 2025-04-14 00:49:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:41.988015 | orchestrator | 2025-04-14 00:49:41 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:41.990709 | orchestrator | 2025-04-14 00:49:41 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:45.044236 | orchestrator | 2025-04-14 00:49:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:45.044342 | orchestrator | 2025-04-14 00:49:45 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:45.044802 | orchestrator | 2025-04-14 00:49:45 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:45.045288 | orchestrator | 2025-04-14 00:49:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:45.046190 | orchestrator | 2025-04-14 00:49:45 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:45.047087 | orchestrator | 2025-04-14 00:49:45 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:48.095461 | orchestrator | 2025-04-14 00:49:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:48.095617 | orchestrator | 2025-04-14 00:49:48 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:48.097191 | orchestrator | 2025-04-14 00:49:48 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:48.098893 | orchestrator | 2025-04-14 00:49:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:48.102450 | orchestrator | 2025-04-14 00:49:48 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:48.103030 | orchestrator | 2025-04-14 00:49:48 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:51.154707 | orchestrator | 2025-04-14 00:49:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:51.154852 | orchestrator | 2025-04-14 00:49:51 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:51.155582 | orchestrator | 2025-04-14 00:49:51 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:51.155667 | orchestrator | 2025-04-14 00:49:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:51.157116 | orchestrator | 2025-04-14 00:49:51 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:51.160055 | orchestrator | 2025-04-14 00:49:51 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:54.202612 | orchestrator | 2025-04-14 00:49:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:54.202807 | orchestrator | 2025-04-14 00:49:54 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:54.204607 | orchestrator | 2025-04-14 
00:49:54 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:54.208026 | orchestrator | 2025-04-14 00:49:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:54.209898 | orchestrator | 2025-04-14 00:49:54 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:54.211855 | orchestrator | 2025-04-14 00:49:54 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:49:57.257009 | orchestrator | 2025-04-14 00:49:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:49:57.257190 | orchestrator | 2025-04-14 00:49:57 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:49:57.257264 | orchestrator | 2025-04-14 00:49:57 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:49:57.258401 | orchestrator | 2025-04-14 00:49:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:49:57.259704 | orchestrator | 2025-04-14 00:49:57 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:49:57.261278 | orchestrator | 2025-04-14 00:49:57 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:00.304140 | orchestrator | 2025-04-14 00:49:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:00.304288 | orchestrator | 2025-04-14 00:50:00 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:50:00.306010 | orchestrator | 2025-04-14 00:50:00 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:00.307122 | orchestrator | 2025-04-14 00:50:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:00.308861 | orchestrator | 2025-04-14 00:50:00 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:00.311012 | orchestrator | 2025-04-14 00:50:00 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:00.311679 | orchestrator | 2025-04-14 00:50:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:03.381080 | orchestrator | 2025-04-14 00:50:03 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:50:03.383037 | orchestrator | 2025-04-14 00:50:03 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:03.384687 | orchestrator | 2025-04-14 00:50:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:03.386400 | orchestrator | 2025-04-14 00:50:03 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:03.388718 | orchestrator | 2025-04-14 00:50:03 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:03.389157 | orchestrator | 2025-04-14 00:50:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:06.420842 | orchestrator | 2025-04-14 00:50:06 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:50:06.421058 | orchestrator | 2025-04-14 00:50:06 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:06.421828 | orchestrator | 2025-04-14 00:50:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:06.423840 | orchestrator | 2025-04-14 00:50:06 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:06.424606 | 
orchestrator | 2025-04-14 00:50:06 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:09.464579 | orchestrator | 2025-04-14 00:50:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:09.464733 | orchestrator | 2025-04-14 00:50:09 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:50:09.465460 | orchestrator | 2025-04-14 00:50:09 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:09.466923 | orchestrator | 2025-04-14 00:50:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:09.468138 | orchestrator | 2025-04-14 00:50:09 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:09.469097 | orchestrator | 2025-04-14 00:50:09 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:12.519682 | orchestrator | 2025-04-14 00:50:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:12.519825 | orchestrator | 2025-04-14 00:50:12 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state STARTED 2025-04-14 00:50:12.520355 | orchestrator | 2025-04-14 00:50:12 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:12.523329 | orchestrator | 2025-04-14 00:50:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:12.524429 | orchestrator | 2025-04-14 00:50:12 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:12.526119 | orchestrator | 2025-04-14 00:50:12 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:15.563951 | orchestrator | 2025-04-14 00:50:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:15.564094 | orchestrator | 2025-04-14 00:50:15 | INFO  | Task fc73846a-5482-4f79-94be-d6f6b17d3590 is in state SUCCESS 2025-04-14 00:50:15.567656 | orchestrator | 2025-04-14 00:50:15.567762 | orchestrator | 2025-04-14 00:50:15.567781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:50:15.567797 | orchestrator | 2025-04-14 00:50:15.567811 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:50:15.567825 | orchestrator | Monday 14 April 2025 00:48:52 +0000 (0:00:00.704) 0:00:00.704 ********** 2025-04-14 00:50:15.567839 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:50:15.567854 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:50:15.567868 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:50:15.567882 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:50:15.567896 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:50:15.567910 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:50:15.567923 | orchestrator | 2025-04-14 00:50:15.567938 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:50:15.567952 | orchestrator | Monday 14 April 2025 00:48:53 +0000 (0:00:01.269) 0:00:01.974 ********** 2025-04-14 00:50:15.567966 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.567980 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.567994 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.568033 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.568047 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.568075 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-04-14 00:50:15.568090 | orchestrator | 2025-04-14 00:50:15.568104 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-04-14 00:50:15.568118 | orchestrator | 2025-04-14 00:50:15.568131 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-04-14 00:50:15.568145 | orchestrator | Monday 14 April 2025 00:48:55 +0000 (0:00:01.486) 0:00:03.461 ********** 2025-04-14 00:50:15.568160 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:50:15.568177 | orchestrator | 2025-04-14 00:50:15.568193 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-14 00:50:15.568209 | orchestrator | Monday 14 April 2025 00:48:56 +0000 (0:00:01.816) 0:00:05.277 ********** 2025-04-14 00:50:15.568225 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-14 00:50:15.568240 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-14 00:50:15.568254 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-14 00:50:15.568268 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-14 00:50:15.568282 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-14 00:50:15.568296 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-14 00:50:15.568310 | orchestrator | 2025-04-14 00:50:15.568323 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-14 00:50:15.568337 | orchestrator | Monday 14 April 2025 00:48:59 +0000 (0:00:02.484) 0:00:07.762 ********** 2025-04-14 00:50:15.568351 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-04-14 00:50:15.568370 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-04-14 00:50:15.568385 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-04-14 00:50:15.568399 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-04-14 00:50:15.568412 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-04-14 00:50:15.568426 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-04-14 00:50:15.568440 | orchestrator | 2025-04-14 00:50:15.568454 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-14 00:50:15.568468 | orchestrator | Monday 14 April 2025 00:49:03 +0000 (0:00:04.462) 0:00:12.225 ********** 2025-04-14 00:50:15.568482 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-04-14 00:50:15.568496 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:50:15.568511 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-04-14 00:50:15.568524 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:50:15.568538 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-04-14 00:50:15.568552 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:50:15.568565 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-04-14 00:50:15.568579 | orchestrator | skipping: 
[testbed-node-0] 2025-04-14 00:50:15.568593 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-04-14 00:50:15.568606 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:50:15.568665 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-04-14 00:50:15.568692 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:50:15.568717 | orchestrator | 2025-04-14 00:50:15.568740 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-04-14 00:50:15.568764 | orchestrator | Monday 14 April 2025 00:49:07 +0000 (0:00:03.335) 0:00:15.560 ********** 2025-04-14 00:50:15.568788 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:50:15.568811 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:50:15.568835 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:50:15.568872 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:50:15.568893 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:50:15.568907 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:50:15.568921 | orchestrator | 2025-04-14 00:50:15.568935 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-04-14 00:50:15.568949 | orchestrator | Monday 14 April 2025 00:49:08 +0000 (0:00:01.282) 0:00:16.842 ********** 2025-04-14 00:50:15.568983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569018 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569206 | orchestrator | 2025-04-14 00:50:15.569220 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-04-14 00:50:15.569234 | orchestrator | Monday 14 April 2025 00:49:10 +0000 (0:00:01.828) 0:00:18.671 ********** 
2025-04-14 00:50:15.569248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.569494 | orchestrator | 2025-04-14 00:50:15.569508 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-04-14 00:50:15.569523 | orchestrator | Monday 14 April 2025 00:49:13 +0000 (0:00:03.240) 0:00:21.912 ********** 2025-04-14 00:50:15.569537 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:50:15.569551 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:50:15.569565 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:50:15.569579 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:50:15.569593 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:50:15.569611 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:50:15.569662 | orchestrator | 2025-04-14 00:50:15.569685 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-04-14 00:50:15.569709 | orchestrator | Monday 14 April 2025 00:49:16 +0000 (0:00:03.224) 0:00:25.136 ********** 2025-04-14 00:50:15.569733 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:50:15.569758 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:50:15.569780 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:50:15.569801 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:50:15.569816 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:50:15.569830 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:50:15.569843 | orchestrator | 2025-04-14 00:50:15.569857 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-04-14 00:50:15.569871 | 
orchestrator | Monday 14 April 2025 00:49:20 +0000 (0:00:03.605) 0:00:28.741 ********** 2025-04-14 00:50:15.569885 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:50:15.569899 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:50:15.569913 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:50:15.569926 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:50:15.569949 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:50:15.569963 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:50:15.569977 | orchestrator | 2025-04-14 00:50:15.569991 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-04-14 00:50:15.570005 | orchestrator | Monday 14 April 2025 00:49:21 +0000 (0:00:01.353) 0:00:30.095 ********** 2025-04-14 00:50:15.570098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-04-14 00:50:15.570359 | orchestrator | 2025-04-14 00:50:15.570374 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570388 | orchestrator | Monday 14 April 2025 00:49:24 +0000 (0:00:02.823) 0:00:32.922 ********** 2025-04-14 00:50:15.570402 | orchestrator | 2025-04-14 00:50:15.570417 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570431 | orchestrator | Monday 14 April 2025 00:49:24 +0000 (0:00:00.253) 0:00:33.175 ********** 2025-04-14 00:50:15.570445 | orchestrator | 2025-04-14 00:50:15.570459 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570473 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:00.439) 
0:00:33.615 ********** 2025-04-14 00:50:15.570488 | orchestrator | 2025-04-14 00:50:15.570502 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570516 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:00.116) 0:00:33.732 ********** 2025-04-14 00:50:15.570530 | orchestrator | 2025-04-14 00:50:15.570549 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570563 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:00.356) 0:00:34.089 ********** 2025-04-14 00:50:15.570577 | orchestrator | 2025-04-14 00:50:15.570591 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-04-14 00:50:15.570605 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:00.143) 0:00:34.232 ********** 2025-04-14 00:50:15.570669 | orchestrator | 2025-04-14 00:50:15.570688 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-04-14 00:50:15.570704 | orchestrator | Monday 14 April 2025 00:49:26 +0000 (0:00:00.440) 0:00:34.673 ********** 2025-04-14 00:50:15.570720 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:50:15.570735 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:50:15.570751 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:50:15.570766 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:50:15.570782 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:50:15.570797 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:50:15.570812 | orchestrator | 2025-04-14 00:50:15.570827 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-04-14 00:50:15.570843 | orchestrator | Monday 14 April 2025 00:49:37 +0000 (0:00:11.082) 0:00:45.755 ********** 2025-04-14 00:50:15.570865 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:50:15.570881 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:50:15.570896 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:50:15.570911 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:50:15.570926 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:50:15.570942 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:50:15.570957 | orchestrator | 2025-04-14 00:50:15.570973 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-14 00:50:15.570988 | orchestrator | Monday 14 April 2025 00:49:39 +0000 (0:00:02.127) 0:00:47.883 ********** 2025-04-14 00:50:15.571011 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:50:15.571027 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:50:15.571044 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:50:15.571069 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:50:15.571086 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:50:15.571101 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:50:15.571116 | orchestrator | 2025-04-14 00:50:15.571131 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-04-14 00:50:15.571146 | orchestrator | Monday 14 April 2025 00:49:50 +0000 (0:00:10.736) 0:00:58.619 ********** 2025-04-14 00:50:15.571162 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-04-14 00:50:15.571177 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-4'}) 2025-04-14 00:50:15.571192 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-04-14 00:50:15.571213 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-04-14 00:50:15.571228 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-04-14 00:50:15.571243 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-04-14 00:50:15.571258 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-04-14 00:50:15.571273 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-04-14 00:50:15.571288 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-04-14 00:50:15.571303 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-04-14 00:50:15.571318 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-04-14 00:50:15.571333 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-04-14 00:50:15.571348 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571363 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571378 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571393 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571408 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571423 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-04-14 00:50:15.571438 | orchestrator | 2025-04-14 00:50:15.571453 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-04-14 00:50:15.571468 | orchestrator | Monday 14 April 2025 00:49:58 +0000 (0:00:08.200) 0:01:06.820 ********** 2025-04-14 00:50:15.571484 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-04-14 00:50:15.571499 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:50:15.571514 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-04-14 00:50:15.571529 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:50:15.571544 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-04-14 00:50:15.571614 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:50:15.571695 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-04-14 00:50:15.571710 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-04-14 00:50:15.571723 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-04-14 00:50:15.571737 | orchestrator | 2025-04-14 00:50:15.571751 
| orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-04-14 00:50:15.571765 | orchestrator | Monday 14 April 2025 00:50:01 +0000 (0:00:02.965) 0:01:09.785 ********** 2025-04-14 00:50:15.571779 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-04-14 00:50:15.571793 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:50:15.571807 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-04-14 00:50:15.571820 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:50:15.571834 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-04-14 00:50:15.571848 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:50:15.571863 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-04-14 00:50:15.571884 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-04-14 00:50:15.571991 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-04-14 00:50:15.572010 | orchestrator | 2025-04-14 00:50:15.572024 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-04-14 00:50:15.572039 | orchestrator | Monday 14 April 2025 00:50:05 +0000 (0:00:04.200) 0:01:13.985 ********** 2025-04-14 00:50:15.572053 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:50:15.572067 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:50:15.572081 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:50:15.572095 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:50:15.572108 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:50:15.572122 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:50:15.572136 | orchestrator | 2025-04-14 00:50:15.572149 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:50:15.572162 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:50:15.572176 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:50:15.572227 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:50:15.572241 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:50:15.572254 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:50:15.572272 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:50:15.572285 | orchestrator | 2025-04-14 00:50:15.572297 | orchestrator | 2025-04-14 00:50:15.572309 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:50:15.572322 | orchestrator | Monday 14 April 2025 00:50:14 +0000 (0:00:09.107) 0:01:23.092 ********** 2025-04-14 00:50:15.572334 | orchestrator | =============================================================================== 2025-04-14 00:50:15.572346 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.84s 2025-04-14 00:50:15.572359 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.08s 2025-04-14 00:50:15.572371 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.20s 2025-04-14 
00:50:15.572383 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 4.46s 2025-04-14 00:50:15.572395 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.20s 2025-04-14 00:50:15.572419 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 3.61s 2025-04-14 00:50:15.572432 | orchestrator | module-load : Drop module persistence ----------------------------------- 3.34s 2025-04-14 00:50:15.572444 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.24s 2025-04-14 00:50:15.572457 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 3.22s 2025-04-14 00:50:15.572469 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.97s 2025-04-14 00:50:15.572485 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.83s 2025-04-14 00:50:15.572498 | orchestrator | module-load : Load modules ---------------------------------------------- 2.48s 2025-04-14 00:50:15.572510 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.13s 2025-04-14 00:50:15.572523 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.83s 2025-04-14 00:50:15.572535 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.82s 2025-04-14 00:50:15.572547 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.75s 2025-04-14 00:50:15.572560 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.49s 2025-04-14 00:50:15.572572 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.35s 2025-04-14 00:50:15.572584 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.28s 2025-04-14 00:50:15.572596 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.27s 2025-04-14 00:50:15.572609 | orchestrator | 2025-04-14 00:50:15 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:15.572644 | orchestrator | 2025-04-14 00:50:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:15.572713 | orchestrator | 2025-04-14 00:50:15 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:15.572830 | orchestrator | 2025-04-14 00:50:15 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:18.623234 | orchestrator | 2025-04-14 00:50:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:18.623345 | orchestrator | 2025-04-14 00:50:18 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:18.627328 | orchestrator | 2025-04-14 00:50:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:18.628936 | orchestrator | 2025-04-14 00:50:18 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:18.630366 | orchestrator | 2025-04-14 00:50:18 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:18.632845 | orchestrator | 2025-04-14 00:50:18 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:21.669792 | orchestrator | 2025-04-14 00:50:18 | INFO  | Wait 1 second(s) until the next check 
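Note: the openvswitch play recapped above boils down to a handful of ovs-vsctl operations per node. Below is a minimal, illustrative Python sketch of roughly equivalent manual steps; it assumes ovs-vsctl is available on the node, the bridge and port names (br-ex, vxlan0) are taken from the log, and the real Kolla openvswitch role applies these changes through its own Ansible modules and containers rather than a script like this.

    # Rough manual equivalent of the openvswitch tasks above; illustrative only.
    import socket
    import subprocess

    def vsctl(*args):
        # Run one ovs-vsctl command and raise if it fails.
        subprocess.run(["ovs-vsctl", *args], check=True)

    node = socket.gethostname()  # the role uses the inventory hostname, e.g. testbed-node-0
    vsctl("set", "Open_vSwitch", ".", f"external_ids:system-id={node}")
    vsctl("set", "Open_vSwitch", ".", f"external_ids:hostname={node}")
    # 'hw-offload' is ensured absent in the log output ('state': 'absent'):
    vsctl("remove", "Open_vSwitch", ".", "other_config", "hw-offload")
    # The bridge and VXLAN port were only changed on testbed-node-0..2 in this job:
    vsctl("--may-exist", "add-br", "br-ex")
    vsctl("--may-exist", "add-port", "br-ex", "vxlan0")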
2025-04-14 00:50:21.669998 | orchestrator | 2025-04-14 00:50:21 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:21.670170 | orchestrator | 2025-04-14 00:50:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:21.670199 | orchestrator | 2025-04-14 00:50:21 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:21.671031 | orchestrator | 2025-04-14 00:50:21 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:21.671983 | orchestrator | 2025-04-14 00:50:21 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:24.734671 | orchestrator | 2025-04-14 00:50:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:24.734843 | orchestrator | 2025-04-14 00:50:24 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:24.737243 | orchestrator | 2025-04-14 00:50:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:24.739755 | orchestrator | 2025-04-14 00:50:24 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:24.742173 | orchestrator | 2025-04-14 00:50:24 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:24.744374 | orchestrator | 2025-04-14 00:50:24 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:24.744462 | orchestrator | 2025-04-14 00:50:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:27.802158 | orchestrator | 2025-04-14 00:50:27 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:27.804767 | orchestrator | 2025-04-14 00:50:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:27.807002 | orchestrator | 2025-04-14 00:50:27 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:27.810115 | orchestrator | 2025-04-14 00:50:27 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:27.811750 | orchestrator | 2025-04-14 00:50:27 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:27.811901 | orchestrator | 2025-04-14 00:50:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:30.859301 | orchestrator | 2025-04-14 00:50:30 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:30.859912 | orchestrator | 2025-04-14 00:50:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:30.859972 | orchestrator | 2025-04-14 00:50:30 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:30.860168 | orchestrator | 2025-04-14 00:50:30 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:30.862175 | orchestrator | 2025-04-14 00:50:30 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:33.904172 | orchestrator | 2025-04-14 00:50:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:33.904314 | orchestrator | 2025-04-14 00:50:33 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:33.906440 | orchestrator | 2025-04-14 00:50:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:33.909365 | orchestrator | 2025-04-14 00:50:33 | INFO  | Task 
9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:33.911833 | orchestrator | 2025-04-14 00:50:33 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:33.914130 | orchestrator | 2025-04-14 00:50:33 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:33.914523 | orchestrator | 2025-04-14 00:50:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:36.957677 | orchestrator | 2025-04-14 00:50:36 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:36.959472 | orchestrator | 2025-04-14 00:50:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:36.966136 | orchestrator | 2025-04-14 00:50:36 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:36.970879 | orchestrator | 2025-04-14 00:50:36 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:36.971273 | orchestrator | 2025-04-14 00:50:36 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:40.044420 | orchestrator | 2025-04-14 00:50:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:40.044550 | orchestrator | 2025-04-14 00:50:40 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:40.045376 | orchestrator | 2025-04-14 00:50:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:40.046588 | orchestrator | 2025-04-14 00:50:40 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:40.047351 | orchestrator | 2025-04-14 00:50:40 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:40.047978 | orchestrator | 2025-04-14 00:50:40 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:40.048180 | orchestrator | 2025-04-14 00:50:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:43.094869 | orchestrator | 2025-04-14 00:50:43 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:43.095262 | orchestrator | 2025-04-14 00:50:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:43.096815 | orchestrator | 2025-04-14 00:50:43 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:43.097639 | orchestrator | 2025-04-14 00:50:43 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:43.099868 | orchestrator | 2025-04-14 00:50:43 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:43.100526 | orchestrator | 2025-04-14 00:50:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:46.136548 | orchestrator | 2025-04-14 00:50:46 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:46.137950 | orchestrator | 2025-04-14 00:50:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:46.140271 | orchestrator | 2025-04-14 00:50:46 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:46.141060 | orchestrator | 2025-04-14 00:50:46 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:46.144805 | orchestrator | 2025-04-14 00:50:46 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:49.186971 | orchestrator | 2025-04-14 
00:50:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:49.187219 | orchestrator | 2025-04-14 00:50:49 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:49.187361 | orchestrator | 2025-04-14 00:50:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:49.188156 | orchestrator | 2025-04-14 00:50:49 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:49.189405 | orchestrator | 2025-04-14 00:50:49 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:49.190887 | orchestrator | 2025-04-14 00:50:49 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:52.243519 | orchestrator | 2025-04-14 00:50:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:52.243690 | orchestrator | 2025-04-14 00:50:52 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:52.245925 | orchestrator | 2025-04-14 00:50:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:52.248525 | orchestrator | 2025-04-14 00:50:52 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:52.251849 | orchestrator | 2025-04-14 00:50:52 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:52.255638 | orchestrator | 2025-04-14 00:50:52 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:55.301989 | orchestrator | 2025-04-14 00:50:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:55.302216 | orchestrator | 2025-04-14 00:50:55 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:55.302352 | orchestrator | 2025-04-14 00:50:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:55.302381 | orchestrator | 2025-04-14 00:50:55 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:55.304913 | orchestrator | 2025-04-14 00:50:55 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:55.305203 | orchestrator | 2025-04-14 00:50:55 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:50:55.305322 | orchestrator | 2025-04-14 00:50:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:50:58.353055 | orchestrator | 2025-04-14 00:50:58 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:50:58.357791 | orchestrator | 2025-04-14 00:50:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:50:58.358533 | orchestrator | 2025-04-14 00:50:58 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:50:58.360735 | orchestrator | 2025-04-14 00:50:58 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:50:58.362726 | orchestrator | 2025-04-14 00:50:58 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:01.405251 | orchestrator | 2025-04-14 00:50:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:01.405398 | orchestrator | 2025-04-14 00:51:01 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:01.406190 | orchestrator | 2025-04-14 00:51:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:01.409052 | orchestrator | 2025-04-14 
00:51:01 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:01.410087 | orchestrator | 2025-04-14 00:51:01 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:01.419154 | orchestrator | 2025-04-14 00:51:01 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:04.467112 | orchestrator | 2025-04-14 00:51:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:04.467277 | orchestrator | 2025-04-14 00:51:04 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:04.467956 | orchestrator | 2025-04-14 00:51:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:04.469421 | orchestrator | 2025-04-14 00:51:04 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:04.470705 | orchestrator | 2025-04-14 00:51:04 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:04.471454 | orchestrator | 2025-04-14 00:51:04 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:04.471766 | orchestrator | 2025-04-14 00:51:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:07.518486 | orchestrator | 2025-04-14 00:51:07 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:07.519089 | orchestrator | 2025-04-14 00:51:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:07.519155 | orchestrator | 2025-04-14 00:51:07 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:07.520150 | orchestrator | 2025-04-14 00:51:07 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:07.520660 | orchestrator | 2025-04-14 00:51:07 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:10.570218 | orchestrator | 2025-04-14 00:51:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:10.570365 | orchestrator | 2025-04-14 00:51:10 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:10.570547 | orchestrator | 2025-04-14 00:51:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:10.570836 | orchestrator | 2025-04-14 00:51:10 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:10.577447 | orchestrator | 2025-04-14 00:51:10 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:10.580912 | orchestrator | 2025-04-14 00:51:10 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:13.621741 | orchestrator | 2025-04-14 00:51:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:13.621909 | orchestrator | 2025-04-14 00:51:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:13.622266 | orchestrator | 2025-04-14 00:51:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:13.622331 | orchestrator | 2025-04-14 00:51:13 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:13.626735 | orchestrator | 2025-04-14 00:51:13 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:16.664875 | orchestrator | 2025-04-14 00:51:13 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:16.664990 | 
orchestrator | 2025-04-14 00:51:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:16.665023 | orchestrator | 2025-04-14 00:51:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:16.666620 | orchestrator | 2025-04-14 00:51:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:16.668198 | orchestrator | 2025-04-14 00:51:16 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:16.669964 | orchestrator | 2025-04-14 00:51:16 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:16.671428 | orchestrator | 2025-04-14 00:51:16 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:16.671574 | orchestrator | 2025-04-14 00:51:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:19.721109 | orchestrator | 2025-04-14 00:51:19 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:19.721521 | orchestrator | 2025-04-14 00:51:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:19.721662 | orchestrator | 2025-04-14 00:51:19 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:19.722401 | orchestrator | 2025-04-14 00:51:19 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:19.723648 | orchestrator | 2025-04-14 00:51:19 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:22.767302 | orchestrator | 2025-04-14 00:51:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:22.767437 | orchestrator | 2025-04-14 00:51:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:22.768779 | orchestrator | 2025-04-14 00:51:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:22.770603 | orchestrator | 2025-04-14 00:51:22 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:22.771446 | orchestrator | 2025-04-14 00:51:22 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:22.773446 | orchestrator | 2025-04-14 00:51:22 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:25.810784 | orchestrator | 2025-04-14 00:51:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:25.810917 | orchestrator | 2025-04-14 00:51:25 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:25.811076 | orchestrator | 2025-04-14 00:51:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:25.812072 | orchestrator | 2025-04-14 00:51:25 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:25.813063 | orchestrator | 2025-04-14 00:51:25 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:25.813785 | orchestrator | 2025-04-14 00:51:25 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:28.872043 | orchestrator | 2025-04-14 00:51:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:28.872192 | orchestrator | 2025-04-14 00:51:28 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:28.872328 | orchestrator | 2025-04-14 00:51:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:28.873853 | 
orchestrator | 2025-04-14 00:51:28 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:28.874443 | orchestrator | 2025-04-14 00:51:28 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:28.876745 | orchestrator | 2025-04-14 00:51:28 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:31.932851 | orchestrator | 2025-04-14 00:51:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:31.932986 | orchestrator | 2025-04-14 00:51:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:31.933118 | orchestrator | 2025-04-14 00:51:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:31.933620 | orchestrator | 2025-04-14 00:51:31 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:31.934463 | orchestrator | 2025-04-14 00:51:31 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:31.935223 | orchestrator | 2025-04-14 00:51:31 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:34.981953 | orchestrator | 2025-04-14 00:51:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:34.982166 | orchestrator | 2025-04-14 00:51:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:34.982344 | orchestrator | 2025-04-14 00:51:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:34.983813 | orchestrator | 2025-04-14 00:51:34 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:34.987745 | orchestrator | 2025-04-14 00:51:34 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:34.991385 | orchestrator | 2025-04-14 00:51:34 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state STARTED 2025-04-14 00:51:34.997064 | orchestrator | 2025-04-14 00:51:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:38.047709 | orchestrator | 2025-04-14 00:51:38 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:38.048413 | orchestrator | 2025-04-14 00:51:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:38.051705 | orchestrator | 2025-04-14 00:51:38 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:38.052402 | orchestrator | 2025-04-14 00:51:38 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:38.053446 | orchestrator | 2025-04-14 00:51:38 | INFO  | Task 658692f6-36b7-40d3-83bb-d54a3524f7c5 is in state SUCCESS 2025-04-14 00:51:38.055231 | orchestrator | 2025-04-14 00:51:38.055280 | orchestrator | 2025-04-14 00:51:38.055296 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-04-14 00:51:38.055312 | orchestrator | 2025-04-14 00:51:38.055327 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-14 00:51:38.055342 | orchestrator | Monday 14 April 2025 00:49:19 +0000 (0:00:00.397) 0:00:00.397 ********** 2025-04-14 00:51:38.055358 | orchestrator | ok: [localhost] => { 2025-04-14 00:51:38.055376 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-04-14 00:51:38.055391 | orchestrator | } 2025-04-14 00:51:38.055406 | orchestrator | 2025-04-14 00:51:38.055422 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-04-14 00:51:38.055436 | orchestrator | Monday 14 April 2025 00:49:19 +0000 (0:00:00.082) 0:00:00.480 ********** 2025-04-14 00:51:38.055452 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-04-14 00:51:38.055468 | orchestrator | ...ignoring 2025-04-14 00:51:38.055483 | orchestrator | 2025-04-14 00:51:38.055498 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-04-14 00:51:38.055513 | orchestrator | Monday 14 April 2025 00:49:22 +0000 (0:00:02.782) 0:00:03.262 ********** 2025-04-14 00:51:38.055528 | orchestrator | skipping: [localhost] 2025-04-14 00:51:38.055544 | orchestrator | 2025-04-14 00:51:38.055592 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-04-14 00:51:38.055608 | orchestrator | Monday 14 April 2025 00:49:22 +0000 (0:00:00.069) 0:00:03.331 ********** 2025-04-14 00:51:38.055622 | orchestrator | ok: [localhost] 2025-04-14 00:51:38.055636 | orchestrator | 2025-04-14 00:51:38.055650 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:51:38.055664 | orchestrator | 2025-04-14 00:51:38.055678 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:51:38.055693 | orchestrator | Monday 14 April 2025 00:49:22 +0000 (0:00:00.175) 0:00:03.507 ********** 2025-04-14 00:51:38.055707 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:51:38.055721 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:51:38.055735 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:51:38.055749 | orchestrator | 2025-04-14 00:51:38.055763 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:51:38.055777 | orchestrator | Monday 14 April 2025 00:49:23 +0000 (0:00:00.692) 0:00:04.199 ********** 2025-04-14 00:51:38.055791 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-04-14 00:51:38.055808 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-04-14 00:51:38.055823 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-04-14 00:51:38.055858 | orchestrator | 2025-04-14 00:51:38.055874 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-04-14 00:51:38.055890 | orchestrator | 2025-04-14 00:51:38.055906 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-14 00:51:38.055921 | orchestrator | Monday 14 April 2025 00:49:23 +0000 (0:00:00.708) 0:00:04.907 ********** 2025-04-14 00:51:38.055937 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:51:38.055954 | orchestrator | 2025-04-14 00:51:38.055970 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-14 00:51:38.055986 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:01.131) 0:00:06.038 ********** 2025-04-14 00:51:38.056001 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:51:38.056015 | orchestrator | 2025-04-14 00:51:38.056029 | orchestrator | TASK 
[rabbitmq : Get current RabbitMQ version] ********************************* 2025-04-14 00:51:38.056044 | orchestrator | Monday 14 April 2025 00:49:26 +0000 (0:00:01.309) 0:00:07.348 ********** 2025-04-14 00:51:38.056057 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056072 | orchestrator | 2025-04-14 00:51:38.056087 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-04-14 00:51:38.056108 | orchestrator | Monday 14 April 2025 00:49:27 +0000 (0:00:01.284) 0:00:08.632 ********** 2025-04-14 00:51:38.056123 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056137 | orchestrator | 2025-04-14 00:51:38.056151 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-04-14 00:51:38.056165 | orchestrator | Monday 14 April 2025 00:49:29 +0000 (0:00:01.872) 0:00:10.505 ********** 2025-04-14 00:51:38.056179 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056193 | orchestrator | 2025-04-14 00:51:38.056207 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-04-14 00:51:38.056221 | orchestrator | Monday 14 April 2025 00:49:29 +0000 (0:00:00.408) 0:00:10.913 ********** 2025-04-14 00:51:38.056236 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056250 | orchestrator | 2025-04-14 00:51:38.056264 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-14 00:51:38.056278 | orchestrator | Monday 14 April 2025 00:49:30 +0000 (0:00:00.395) 0:00:11.309 ********** 2025-04-14 00:51:38.056292 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:51:38.056306 | orchestrator | 2025-04-14 00:51:38.056320 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-04-14 00:51:38.056334 | orchestrator | Monday 14 April 2025 00:49:31 +0000 (0:00:01.066) 0:00:12.375 ********** 2025-04-14 00:51:38.056348 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:51:38.056362 | orchestrator | 2025-04-14 00:51:38.056376 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-04-14 00:51:38.056390 | orchestrator | Monday 14 April 2025 00:49:32 +0000 (0:00:00.809) 0:00:13.185 ********** 2025-04-14 00:51:38.056403 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056487 | orchestrator | 2025-04-14 00:51:38.056503 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-04-14 00:51:38.056517 | orchestrator | Monday 14 April 2025 00:49:32 +0000 (0:00:00.382) 0:00:13.567 ********** 2025-04-14 00:51:38.056531 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.056545 | orchestrator | 2025-04-14 00:51:38.056591 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-04-14 00:51:38.056606 | orchestrator | Monday 14 April 2025 00:49:33 +0000 (0:00:00.474) 0:00:14.042 ********** 2025-04-14 00:51:38.056623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056678 | orchestrator | 2025-04-14 00:51:38.056693 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-04-14 00:51:38.056707 | orchestrator | Monday 14 April 2025 00:49:34 +0000 (0:00:01.015) 0:00:15.058 ********** 2025-04-14 00:51:38.056730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.056784 | orchestrator | 2025-04-14 00:51:38.056798 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-04-14 00:51:38.056812 | orchestrator | Monday 14 April 2025 00:49:35 +0000 (0:00:01.743) 0:00:16.801 ********** 2025-04-14 00:51:38.056826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-14 00:51:38.056840 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-14 00:51:38.056855 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-04-14 00:51:38.056869 | orchestrator | 2025-04-14 00:51:38.056883 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-04-14 00:51:38.056897 | orchestrator | Monday 14 April 2025 00:49:38 +0000 (0:00:03.009) 0:00:19.810 ********** 2025-04-14 00:51:38.056911 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-14 00:51:38.056934 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-14 00:51:38.056949 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-04-14 00:51:38.056963 | orchestrator | 2025-04-14 00:51:38.056977 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-04-14 00:51:38.056996 | orchestrator | Monday 14 April 2025 00:49:43 +0000 (0:00:04.628) 0:00:24.439 ********** 2025-04-14 00:51:38.057010 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-14 00:51:38.057023 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-14 00:51:38.057044 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-04-14 00:51:38.057058 | orchestrator | 2025-04-14 00:51:38.057078 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-04-14 00:51:38.057093 | orchestrator | Monday 14 April 2025 00:49:45 +0000 (0:00:01.723) 0:00:26.162 ********** 2025-04-14 00:51:38.057107 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-14 00:51:38.057121 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-14 00:51:38.057135 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-04-14 00:51:38.057149 | orchestrator | 2025-04-14 00:51:38.057163 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-04-14 00:51:38.057177 | orchestrator | Monday 14 April 2025 00:49:47 +0000 (0:00:02.097) 0:00:28.259 ********** 2025-04-14 00:51:38.057191 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-14 00:51:38.057205 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-14 00:51:38.057219 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-04-14 00:51:38.057233 | orchestrator | 2025-04-14 00:51:38.057247 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-04-14 00:51:38.057265 | orchestrator | Monday 14 April 2025 00:49:48 +0000 (0:00:01.582) 0:00:29.842 ********** 2025-04-14 00:51:38.057280 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-14 00:51:38.057294 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-14 00:51:38.057308 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-04-14 00:51:38.057322 | orchestrator | 2025-04-14 
00:51:38.057336 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-04-14 00:51:38.057350 | orchestrator | Monday 14 April 2025 00:49:50 +0000 (0:00:02.099) 0:00:31.941 ********** 2025-04-14 00:51:38.057364 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.057378 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:51:38.057392 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:51:38.057405 | orchestrator | 2025-04-14 00:51:38.057419 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-04-14 00:51:38.057433 | orchestrator | Monday 14 April 2025 00:49:51 +0000 (0:00:00.790) 0:00:32.732 ********** 2025-04-14 00:51:38.057448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.057464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.057493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:51:38.057509 | orchestrator | 2025-04-14 00:51:38.057523 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-04-14 00:51:38.057537 | orchestrator | Monday 14 April 2025 00:49:53 +0000 (0:00:01.795) 0:00:34.527 ********** 2025-04-14 00:51:38.057551 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:51:38.057582 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:51:38.057597 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:51:38.057611 | orchestrator | 2025-04-14 00:51:38.057625 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-04-14 00:51:38.057639 | orchestrator | Monday 14 April 2025 00:49:54 +0000 (0:00:01.061) 0:00:35.589 ********** 2025-04-14 00:51:38.057653 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:51:38.057667 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:51:38.057681 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:51:38.057695 | orchestrator | 2025-04-14 00:51:38.057709 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-04-14 00:51:38.057723 | orchestrator | Monday 14 April 2025 00:50:00 +0000 (0:00:06.274) 0:00:41.864 ********** 2025-04-14 00:51:38.057737 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:51:38.057751 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:51:38.057765 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:51:38.057778 | orchestrator | 2025-04-14 00:51:38.057793 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-14 00:51:38.057806 | orchestrator | 2025-04-14 00:51:38.057820 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-14 00:51:38.057835 | orchestrator | Monday 14 April 2025 00:50:01 +0000 (0:00:00.402) 0:00:42.267 ********** 2025-04-14 00:51:38.057849 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:51:38.057863 | orchestrator | 2025-04-14 00:51:38.057877 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-14 00:51:38.057890 | orchestrator | Monday 14 April 2025 00:50:02 +0000 (0:00:00.943) 0:00:43.210 ********** 2025-04-14 00:51:38.057904 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:51:38.057924 | orchestrator | 2025-04-14 00:51:38.057938 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-14 00:51:38.057952 | orchestrator | Monday 14 April 2025 00:50:02 +0000 (0:00:00.269) 0:00:43.480 ********** 2025-04-14 00:51:38.057967 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:51:38.057980 | orchestrator | 2025-04-14 00:51:38.057994 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-14 00:51:38.058008 | orchestrator | Monday 14 April 2025 00:50:04 +0000 (0:00:01.799) 0:00:45.282 ********** 2025-04-14 00:51:38.058097 | orchestrator | 
changed: [testbed-node-0] 2025-04-14 00:51:38.058112 | orchestrator | 2025-04-14 00:51:38.058126 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-14 00:51:38.058140 | orchestrator | 2025-04-14 00:51:38.058154 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-14 00:51:38.058168 | orchestrator | Monday 14 April 2025 00:50:58 +0000 (0:00:53.860) 0:01:39.143 ********** 2025-04-14 00:51:38.058182 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:51:38.058196 | orchestrator | 2025-04-14 00:51:38.058210 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-14 00:51:38.058224 | orchestrator | Monday 14 April 2025 00:50:58 +0000 (0:00:00.568) 0:01:39.712 ********** 2025-04-14 00:51:38.058238 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:51:38.058252 | orchestrator | 2025-04-14 00:51:38.058266 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-14 00:51:38.058280 | orchestrator | Monday 14 April 2025 00:50:59 +0000 (0:00:00.268) 0:01:39.980 ********** 2025-04-14 00:51:38.058293 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:51:38.058308 | orchestrator | 2025-04-14 00:51:38.058322 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-14 00:51:38.058336 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:06.841) 0:01:46.822 ********** 2025-04-14 00:51:38.058349 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:51:38.058363 | orchestrator | 2025-04-14 00:51:38.058377 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-04-14 00:51:38.058391 | orchestrator | 2025-04-14 00:51:38.058405 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-04-14 00:51:38.058419 | orchestrator | Monday 14 April 2025 00:51:14 +0000 (0:00:09.082) 0:01:55.905 ********** 2025-04-14 00:51:38.058433 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:51:38.058447 | orchestrator | 2025-04-14 00:51:38.058467 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-04-14 00:51:38.058481 | orchestrator | Monday 14 April 2025 00:51:15 +0000 (0:00:00.615) 0:01:56.520 ********** 2025-04-14 00:51:38.058495 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:51:38.058514 | orchestrator | 2025-04-14 00:51:38.058529 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-04-14 00:51:38.058550 | orchestrator | Monday 14 April 2025 00:51:15 +0000 (0:00:00.239) 0:01:56.760 ********** 2025-04-14 00:51:41.097648 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:51:41.097793 | orchestrator | 2025-04-14 00:51:41.097814 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-04-14 00:51:41.097831 | orchestrator | Monday 14 April 2025 00:51:17 +0000 (0:00:01.779) 0:01:58.540 ********** 2025-04-14 00:51:41.097845 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:51:41.097860 | orchestrator | 2025-04-14 00:51:41.097877 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-04-14 00:51:41.097904 | orchestrator | 2025-04-14 00:51:41.097930 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-04-14 
00:51:41.097955 | orchestrator | Monday 14 April 2025 00:51:31 +0000 (0:00:14.256) 0:02:12.796 ********** 2025-04-14 00:51:41.097980 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:51:41.098007 | orchestrator | 2025-04-14 00:51:41.098355 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-04-14 00:51:41.098440 | orchestrator | Monday 14 April 2025 00:51:32 +0000 (0:00:00.816) 0:02:13.612 ********** 2025-04-14 00:51:41.098468 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-14 00:51:41.098492 | orchestrator | enable_outward_rabbitmq_True 2025-04-14 00:51:41.098517 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-04-14 00:51:41.098541 | orchestrator | outward_rabbitmq_restart 2025-04-14 00:51:41.098593 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:51:41.098621 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:51:41.098647 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:51:41.098732 | orchestrator | 2025-04-14 00:51:41.098749 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-04-14 00:51:41.098764 | orchestrator | skipping: no hosts matched 2025-04-14 00:51:41.098778 | orchestrator | 2025-04-14 00:51:41.098792 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-04-14 00:51:41.098806 | orchestrator | skipping: no hosts matched 2025-04-14 00:51:41.098821 | orchestrator | 2025-04-14 00:51:41.098835 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-04-14 00:51:41.098849 | orchestrator | skipping: no hosts matched 2025-04-14 00:51:41.098863 | orchestrator | 2025-04-14 00:51:41.098877 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:51:41.098892 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-14 00:51:41.098908 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-14 00:51:41.098923 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:51:41.098937 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 00:51:41.098951 | orchestrator | 2025-04-14 00:51:41.098965 | orchestrator | 2025-04-14 00:51:41.098979 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:51:41.098993 | orchestrator | Monday 14 April 2025 00:51:35 +0000 (0:00:02.648) 0:02:16.261 ********** 2025-04-14 00:51:41.099007 | orchestrator | =============================================================================== 2025-04-14 00:51:41.099021 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.20s 2025-04-14 00:51:41.099035 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.42s 2025-04-14 00:51:41.099049 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.27s 2025-04-14 00:51:41.099063 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.63s 2025-04-14 00:51:41.099077 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.01s 2025-04-14 
00:51:41.099104 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.78s 2025-04-14 00:51:41.099118 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.65s 2025-04-14 00:51:41.099132 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.13s 2025-04-14 00:51:41.099146 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.10s 2025-04-14 00:51:41.099159 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.10s 2025-04-14 00:51:41.099216 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.87s 2025-04-14 00:51:41.099235 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.80s 2025-04-14 00:51:41.099340 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.74s 2025-04-14 00:51:41.099390 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.72s 2025-04-14 00:51:41.099416 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.58s 2025-04-14 00:51:41.099454 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.31s 2025-04-14 00:51:41.099478 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 1.28s 2025-04-14 00:51:41.099503 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.13s 2025-04-14 00:51:41.099527 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.07s 2025-04-14 00:51:41.099550 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.06s 2025-04-14 00:51:41.099595 | orchestrator | 2025-04-14 00:51:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:41.099632 | orchestrator | 2025-04-14 00:51:41 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:41.099758 | orchestrator | 2025-04-14 00:51:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:41.099776 | orchestrator | 2025-04-14 00:51:41 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:41.099797 | orchestrator | 2025-04-14 00:51:41 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:44.144118 | orchestrator | 2025-04-14 00:51:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:44.144252 | orchestrator | 2025-04-14 00:51:44 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:44.145339 | orchestrator | 2025-04-14 00:51:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:44.145961 | orchestrator | 2025-04-14 00:51:44 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:44.146863 | orchestrator | 2025-04-14 00:51:44 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:47.200026 | orchestrator | 2025-04-14 00:51:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:47.200144 | orchestrator | 2025-04-14 00:51:47 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:50.246920 | orchestrator | 2025-04-14 00:51:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
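Note: the ignored "Check RabbitMQ service" failure recapped above (2.78s, ...ignoring) is expected on a first deployment: the play probes the management endpoint 192.168.16.9:15672 for the string "RabbitMQ Management" and treats a timeout as "RabbitMQ not yet deployed". A standalone sketch of that probe follows; host, port and search string are taken from the log, while the helper name and retry cadence are made up here for illustration.

    # Illustrative probe matching the ignored 'Check RabbitMQ service' task above.
    import time
    import urllib.request

    def rabbitmq_management_up(host="192.168.16.9", port=15672, timeout=3.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with urllib.request.urlopen(f"http://{host}:{port}/", timeout=1) as resp:
                    if b"RabbitMQ Management" in resp.read():
                        return True
            except OSError:
                pass  # endpoint not reachable yet; retry until the deadline
            time.sleep(0.5)
        return False

    # False here corresponds to the skipped "upgrade" branch in the play, so
    # kolla_action_rabbitmq stays at the normal deploy action.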
2025-04-14 00:51:50.247051 | orchestrator | 2025-04-14 00:51:47 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:50.247072 | orchestrator | 2025-04-14 00:51:47 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:50.247088 | orchestrator | 2025-04-14 00:51:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:50.247120 | orchestrator | 2025-04-14 00:51:50 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:50.249071 | orchestrator | 2025-04-14 00:51:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:50.251239 | orchestrator | 2025-04-14 00:51:50 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:50.255861 | orchestrator | 2025-04-14 00:51:50 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:53.312835 | orchestrator | 2025-04-14 00:51:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:53.312979 | orchestrator | 2025-04-14 00:51:53 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:53.313752 | orchestrator | 2025-04-14 00:51:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:53.316623 | orchestrator | 2025-04-14 00:51:53 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:56.371483 | orchestrator | 2025-04-14 00:51:53 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:56.371651 | orchestrator | 2025-04-14 00:51:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:56.371691 | orchestrator | 2025-04-14 00:51:56 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:56.373996 | orchestrator | 2025-04-14 00:51:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:56.375655 | orchestrator | 2025-04-14 00:51:56 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:56.377181 | orchestrator | 2025-04-14 00:51:56 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:56.377515 | orchestrator | 2025-04-14 00:51:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:51:59.434594 | orchestrator | 2025-04-14 00:51:59 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:51:59.436759 | orchestrator | 2025-04-14 00:51:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:51:59.440700 | orchestrator | 2025-04-14 00:51:59 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:51:59.442722 | orchestrator | 2025-04-14 00:51:59 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:51:59.445314 | orchestrator | 2025-04-14 00:51:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:02.492319 | orchestrator | 2025-04-14 00:52:02 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:02.493591 | orchestrator | 2025-04-14 00:52:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:02.493897 | orchestrator | 2025-04-14 00:52:02 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:02.495289 | orchestrator | 2025-04-14 00:52:02 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 
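
The alternating "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines are the deploy wrapper polling its queued background tasks: it re-reads each task's state roughly once per second and only moves on when every task reports SUCCESS. A minimal shell sketch of that loop follows; check_task_state is a hypothetical helper standing in for whatever the wrapper actually calls, and the task IDs are copied from the log purely as placeholders.

    # Minimal polling sketch (illustrative, not the actual deploy code).
    # check_task_state <id> is a hypothetical helper that prints a task's state.
    tasks="d443319d-8407-47fc-b0b6-f7870c4a1069 afc851a2-7042-41e3-be43-561439f9152f"
    all_done=0
    while [ "$all_done" -eq 0 ]; do
        all_done=1
        for id in $tasks; do
            state="$(check_task_state "$id")"
            echo "Task $id is in state $state"
            [ "$state" != "SUCCESS" ] && all_done=0
        done
        if [ "$all_done" -eq 0 ]; then
            echo "Wait 1 second(s) until the next check"
            sleep 1
        fi
    done
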
2025-04-14 00:52:05.544929 | orchestrator | 2025-04-14 00:52:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:05.545054 | orchestrator | 2025-04-14 00:52:05 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:05.548668 | orchestrator | 2025-04-14 00:52:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:05.550676 | orchestrator | 2025-04-14 00:52:05 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:05.550722 | orchestrator | 2025-04-14 00:52:05 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:08.600603 | orchestrator | 2025-04-14 00:52:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:08.600775 | orchestrator | 2025-04-14 00:52:08 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:08.602472 | orchestrator | 2025-04-14 00:52:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:08.604330 | orchestrator | 2025-04-14 00:52:08 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:08.606358 | orchestrator | 2025-04-14 00:52:08 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:08.606884 | orchestrator | 2025-04-14 00:52:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:11.671737 | orchestrator | 2025-04-14 00:52:11 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:11.671960 | orchestrator | 2025-04-14 00:52:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:11.672943 | orchestrator | 2025-04-14 00:52:11 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:11.673900 | orchestrator | 2025-04-14 00:52:11 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:14.718098 | orchestrator | 2025-04-14 00:52:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:14.718267 | orchestrator | 2025-04-14 00:52:14 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:14.720124 | orchestrator | 2025-04-14 00:52:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:14.720649 | orchestrator | 2025-04-14 00:52:14 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:14.720712 | orchestrator | 2025-04-14 00:52:14 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:17.765647 | orchestrator | 2025-04-14 00:52:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:17.765809 | orchestrator | 2025-04-14 00:52:17 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:17.766151 | orchestrator | 2025-04-14 00:52:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:17.766911 | orchestrator | 2025-04-14 00:52:17 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:17.768196 | orchestrator | 2025-04-14 00:52:17 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:20.821272 | orchestrator | 2025-04-14 00:52:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:20.821397 | orchestrator | 2025-04-14 00:52:20 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:20.822791 
| orchestrator | 2025-04-14 00:52:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:20.824985 | orchestrator | 2025-04-14 00:52:20 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:20.826285 | orchestrator | 2025-04-14 00:52:20 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:20.826719 | orchestrator | 2025-04-14 00:52:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:23.872747 | orchestrator | 2025-04-14 00:52:23 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:23.873247 | orchestrator | 2025-04-14 00:52:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:23.874275 | orchestrator | 2025-04-14 00:52:23 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:23.875223 | orchestrator | 2025-04-14 00:52:23 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:23.875407 | orchestrator | 2025-04-14 00:52:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:26.919856 | orchestrator | 2025-04-14 00:52:26 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:26.920138 | orchestrator | 2025-04-14 00:52:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:26.921722 | orchestrator | 2025-04-14 00:52:26 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:26.922218 | orchestrator | 2025-04-14 00:52:26 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:26.922376 | orchestrator | 2025-04-14 00:52:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:29.961677 | orchestrator | 2025-04-14 00:52:29 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:29.965859 | orchestrator | 2025-04-14 00:52:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:29.967970 | orchestrator | 2025-04-14 00:52:29 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:29.968011 | orchestrator | 2025-04-14 00:52:29 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:33.025251 | orchestrator | 2025-04-14 00:52:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:33.025393 | orchestrator | 2025-04-14 00:52:33 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:33.025944 | orchestrator | 2025-04-14 00:52:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:33.026791 | orchestrator | 2025-04-14 00:52:33 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:33.027795 | orchestrator | 2025-04-14 00:52:33 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:36.073196 | orchestrator | 2025-04-14 00:52:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:36.073391 | orchestrator | 2025-04-14 00:52:36 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:36.073554 | orchestrator | 2025-04-14 00:52:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:36.074317 | orchestrator | 2025-04-14 00:52:36 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:36.078740 | 
orchestrator | 2025-04-14 00:52:36 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:39.136892 | orchestrator | 2025-04-14 00:52:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:39.137043 | orchestrator | 2025-04-14 00:52:39 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:39.137774 | orchestrator | 2025-04-14 00:52:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:39.139679 | orchestrator | 2025-04-14 00:52:39 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:39.140928 | orchestrator | 2025-04-14 00:52:39 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:42.191324 | orchestrator | 2025-04-14 00:52:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:42.191529 | orchestrator | 2025-04-14 00:52:42 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:42.191870 | orchestrator | 2025-04-14 00:52:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:42.191913 | orchestrator | 2025-04-14 00:52:42 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:42.193525 | orchestrator | 2025-04-14 00:52:42 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:45.228477 | orchestrator | 2025-04-14 00:52:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:45.228639 | orchestrator | 2025-04-14 00:52:45 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:45.230416 | orchestrator | 2025-04-14 00:52:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:45.230899 | orchestrator | 2025-04-14 00:52:45 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:45.231992 | orchestrator | 2025-04-14 00:52:45 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:48.272893 | orchestrator | 2025-04-14 00:52:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:48.273031 | orchestrator | 2025-04-14 00:52:48 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:48.278526 | orchestrator | 2025-04-14 00:52:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:48.284874 | orchestrator | 2025-04-14 00:52:48 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:51.355929 | orchestrator | 2025-04-14 00:52:48 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:51.356054 | orchestrator | 2025-04-14 00:52:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:51.356094 | orchestrator | 2025-04-14 00:52:51 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:51.356638 | orchestrator | 2025-04-14 00:52:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:51.357309 | orchestrator | 2025-04-14 00:52:51 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:51.358737 | orchestrator | 2025-04-14 00:52:51 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state STARTED 2025-04-14 00:52:54.396572 | orchestrator | 2025-04-14 00:52:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:54.396707 | orchestrator | 2025-04-14 
00:52:54 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:54.397706 | orchestrator | 2025-04-14 00:52:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:54.398441 | orchestrator | 2025-04-14 00:52:54 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:52:54.400217 | orchestrator | 2025-04-14 00:52:54 | INFO  | Task 663f06fc-8b4f-44dd-8ab8-2c948fda2d6e is in state SUCCESS 2025-04-14 00:52:54.400731 | orchestrator | 2025-04-14 00:52:54.402125 | orchestrator | 2025-04-14 00:52:54.402294 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:52:54.402329 | orchestrator | 2025-04-14 00:52:54.402356 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:52:54.402383 | orchestrator | Monday 14 April 2025 00:50:20 +0000 (0:00:00.245) 0:00:00.245 ********** 2025-04-14 00:52:54.402584 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:52:54.402620 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:52:54.402647 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:52:54.402674 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.402700 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.402725 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.402749 | orchestrator | 2025-04-14 00:52:54.402775 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:52:54.402798 | orchestrator | Monday 14 April 2025 00:50:20 +0000 (0:00:00.799) 0:00:01.044 ********** 2025-04-14 00:52:54.402812 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-04-14 00:52:54.402827 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-04-14 00:52:54.402841 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-04-14 00:52:54.402855 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-04-14 00:52:54.402870 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-04-14 00:52:54.402883 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-04-14 00:52:54.402897 | orchestrator | 2025-04-14 00:52:54.402912 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-04-14 00:52:54.402926 | orchestrator | 2025-04-14 00:52:54.402940 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-04-14 00:52:54.402980 | orchestrator | Monday 14 April 2025 00:50:22 +0000 (0:00:02.051) 0:00:03.096 ********** 2025-04-14 00:52:54.402996 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:52:54.403012 | orchestrator | 2025-04-14 00:52:54.403026 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-04-14 00:52:54.403038 | orchestrator | Monday 14 April 2025 00:50:24 +0000 (0:00:01.901) 0:00:04.997 ********** 2025-04-14 00:52:54.403052 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403147 | orchestrator | 2025-04-14 00:52:54.403160 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-04-14 00:52:54.403173 | orchestrator | Monday 14 April 2025 00:50:25 +0000 (0:00:01.152) 0:00:06.149 ********** 2025-04-14 00:52:54.403198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403282 | orchestrator | 2025-04-14 00:52:54.403295 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-04-14 00:52:54.403307 | orchestrator | Monday 14 April 2025 00:50:28 +0000 (0:00:02.255) 0:00:08.405 ********** 2025-04-14 00:52:54.403319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403420 | orchestrator | 2025-04-14 00:52:54.403432 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-04-14 00:52:54.403445 | orchestrator | Monday 14 April 2025 00:50:29 +0000 (0:00:01.271) 0:00:09.676 ********** 2025-04-14 00:52:54.403457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403495 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403579 | orchestrator | 2025-04-14 00:52:54.403592 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-04-14 00:52:54.403604 | orchestrator | Monday 14 April 2025 00:50:32 +0000 (0:00:03.011) 0:00:12.688 ********** 2025-04-14 00:52:54.403617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403667 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.403693 | orchestrator | 2025-04-14 00:52:54.403705 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-04-14 00:52:54.403718 | orchestrator | Monday 14 April 2025 00:50:34 +0000 (0:00:01.775) 0:00:14.463 ********** 2025-04-14 00:52:54.403730 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:52:54.403743 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:52:54.403755 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:52:54.403768 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.403780 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.403792 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.403810 | orchestrator | 2025-04-14 00:52:54.403823 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-04-14 00:52:54.403835 | orchestrator | Monday 14 April 2025 00:50:37 +0000 (0:00:03.056) 0:00:17.520 ********** 2025-04-14 00:52:54.403848 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-04-14 00:52:54.403860 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-04-14 00:52:54.403873 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-04-14 00:52:54.403890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-04-14 00:52:54.403903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-04-14 00:52:54.403916 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-04-14 00:52:54.403928 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.403941 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.403953 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.403971 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.403983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.403996 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-04-14 00:52:54.404008 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404022 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404035 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404048 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404060 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-04-14 00:52:54.404085 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404099 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404111 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-04-14 00:52:54.404161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404173 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404197 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404215 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404228 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-04-14 00:52:54.404241 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404253 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404266 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404278 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404303 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-04-14 00:52:54.404315 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-14 00:52:54.404329 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-14 00:52:54.404342 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-04-14 00:52:54.404355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-14 00:52:54.404373 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-14 00:52:54.404386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-04-14 00:52:54.404399 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-04-14 00:52:54.404411 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-04-14 00:52:54.404424 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-04-14 00:52:54.404436 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-04-14 00:52:54.404449 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-04-14 00:52:54.404461 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-14 00:52:54.404499 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-04-14 00:52:54.404514 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-14 00:52:54.404527 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-04-14 00:52:54.404540 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-14 00:52:54.404552 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-14 00:52:54.404565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-04-14 00:52:54.404577 | orchestrator | 2025-04-14 00:52:54.404590 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404602 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:18.853) 0:00:36.373 ********** 2025-04-14 00:52:54.404621 | orchestrator | 2025-04-14 00:52:54.404633 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404645 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:00.053) 0:00:36.427 ********** 2025-04-14 00:52:54.404657 | orchestrator | 2025-04-14 00:52:54.404669 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404682 | orchestrator | Monday 14 April 
2025 00:50:56 +0000 (0:00:00.253) 0:00:36.680 ********** 2025-04-14 00:52:54.404694 | orchestrator | 2025-04-14 00:52:54.404706 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404719 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:00.066) 0:00:36.747 ********** 2025-04-14 00:52:54.404732 | orchestrator | 2025-04-14 00:52:54.404744 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404756 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:00.069) 0:00:36.816 ********** 2025-04-14 00:52:54.404768 | orchestrator | 2025-04-14 00:52:54.404781 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-04-14 00:52:54.404793 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:00.056) 0:00:36.873 ********** 2025-04-14 00:52:54.404805 | orchestrator | 2025-04-14 00:52:54.404817 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-04-14 00:52:54.404830 | orchestrator | Monday 14 April 2025 00:50:57 +0000 (0:00:00.324) 0:00:37.198 ********** 2025-04-14 00:52:54.404842 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.404855 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:52:54.404867 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:52:54.404880 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.404892 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:52:54.404904 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.404916 | orchestrator | 2025-04-14 00:52:54.404928 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-04-14 00:52:54.404941 | orchestrator | Monday 14 April 2025 00:50:59 +0000 (0:00:02.453) 0:00:39.651 ********** 2025-04-14 00:52:54.404953 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.404966 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:52:54.404978 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.404990 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.405002 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:52:54.405015 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:52:54.405027 | orchestrator | 2025-04-14 00:52:54.405039 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-04-14 00:52:54.405052 | orchestrator | 2025-04-14 00:52:54.405064 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-14 00:52:54.405076 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:18.727) 0:00:58.379 ********** 2025-04-14 00:52:54.405089 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:52:54.405101 | orchestrator | 2025-04-14 00:52:54.405113 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-14 00:52:54.405131 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:00.686) 0:00:59.065 ********** 2025-04-14 00:52:54.405144 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:52:54.405156 | orchestrator | 2025-04-14 00:52:54.405175 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-04-14 00:52:54.405192 | orchestrator | Monday 
14 April 2025 00:51:19 +0000 (0:00:00.858) 0:00:59.923 ********** 2025-04-14 00:52:54.405205 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.405217 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.405230 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.405242 | orchestrator | 2025-04-14 00:52:54.405254 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-04-14 00:52:54.405267 | orchestrator | Monday 14 April 2025 00:51:21 +0000 (0:00:01.312) 0:01:01.235 ********** 2025-04-14 00:52:54.405290 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.405303 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.405315 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.405327 | orchestrator | 2025-04-14 00:52:54.405340 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-04-14 00:52:54.405352 | orchestrator | Monday 14 April 2025 00:51:21 +0000 (0:00:00.656) 0:01:01.892 ********** 2025-04-14 00:52:54.405364 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.405377 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.405389 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.405401 | orchestrator | 2025-04-14 00:52:54.405413 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-04-14 00:52:54.405425 | orchestrator | Monday 14 April 2025 00:51:22 +0000 (0:00:00.850) 0:01:02.742 ********** 2025-04-14 00:52:54.405438 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.405450 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.405462 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.405520 | orchestrator | 2025-04-14 00:52:54.405542 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-04-14 00:52:54.405556 | orchestrator | Monday 14 April 2025 00:51:23 +0000 (0:00:00.714) 0:01:03.457 ********** 2025-04-14 00:52:54.405568 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.405580 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.405593 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.405605 | orchestrator | 2025-04-14 00:52:54.405618 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-04-14 00:52:54.405630 | orchestrator | Monday 14 April 2025 00:51:23 +0000 (0:00:00.575) 0:01:04.033 ********** 2025-04-14 00:52:54.405642 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.405654 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.405672 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.405685 | orchestrator | 2025-04-14 00:52:54.405698 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-04-14 00:52:54.405710 | orchestrator | Monday 14 April 2025 00:51:24 +0000 (0:00:00.676) 0:01:04.710 ********** 2025-04-14 00:52:54.405722 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.405735 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.405747 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.405759 | orchestrator | 2025-04-14 00:52:54.405772 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-04-14 00:52:54.405784 | orchestrator | Monday 14 April 2025 00:51:25 +0000 (0:00:00.682) 0:01:05.392 ********** 2025-04-14 00:52:54.405797 | orchestrator | skipping: [testbed-node-0] 2025-04-14 
00:52:54.405809 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.405821 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.405834 | orchestrator | 2025-04-14 00:52:54.405846 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-04-14 00:52:54.405858 | orchestrator | Monday 14 April 2025 00:51:25 +0000 (0:00:00.557) 0:01:05.950 ********** 2025-04-14 00:52:54.405870 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.405883 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.405895 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.405907 | orchestrator | 2025-04-14 00:52:54.405920 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-04-14 00:52:54.405932 | orchestrator | Monday 14 April 2025 00:51:26 +0000 (0:00:00.317) 0:01:06.267 ********** 2025-04-14 00:52:54.405944 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.405957 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.405969 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.405981 | orchestrator | 2025-04-14 00:52:54.405993 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-04-14 00:52:54.406006 | orchestrator | Monday 14 April 2025 00:51:26 +0000 (0:00:00.489) 0:01:06.757 ********** 2025-04-14 00:52:54.406067 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406105 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406127 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406150 | orchestrator | 2025-04-14 00:52:54.406172 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-04-14 00:52:54.406187 | orchestrator | Monday 14 April 2025 00:51:27 +0000 (0:00:00.501) 0:01:07.259 ********** 2025-04-14 00:52:54.406200 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406212 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406225 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406238 | orchestrator | 2025-04-14 00:52:54.406250 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-04-14 00:52:54.406262 | orchestrator | Monday 14 April 2025 00:51:27 +0000 (0:00:00.607) 0:01:07.866 ********** 2025-04-14 00:52:54.406275 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406287 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406299 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406312 | orchestrator | 2025-04-14 00:52:54.406324 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-04-14 00:52:54.406336 | orchestrator | Monday 14 April 2025 00:51:28 +0000 (0:00:00.502) 0:01:08.369 ********** 2025-04-14 00:52:54.406349 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406361 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406409 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406424 | orchestrator | 2025-04-14 00:52:54.406437 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-04-14 00:52:54.406450 | orchestrator | Monday 14 April 2025 00:51:28 +0000 (0:00:00.701) 0:01:09.071 ********** 2025-04-14 00:52:54.406463 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406499 | orchestrator | skipping: [testbed-node-1] 
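
All of the lookup_cluster checks in this block (existing volumes, cluster existence, port liveness, database info, leader/follower role) are skipped because this is a fresh deployment with no OVN NB/SB database cluster yet. On a node where the cluster is already up, the same information can be read directly from the clustered databases; the sketch below assumes the kolla container names ovn_nb_db / ovn_sb_db and the default control-socket paths, both of which may differ in a given installation.

    # Illustrative manual check of OVN Raft cluster health (names and paths assumed).
    docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
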
2025-04-14 00:52:54.406512 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406525 | orchestrator | 2025-04-14 00:52:54.406545 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-04-14 00:52:54.406558 | orchestrator | Monday 14 April 2025 00:51:29 +0000 (0:00:00.771) 0:01:09.842 ********** 2025-04-14 00:52:54.406571 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406583 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406595 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406608 | orchestrator | 2025-04-14 00:52:54.406629 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-04-14 00:52:54.406655 | orchestrator | Monday 14 April 2025 00:51:30 +0000 (0:00:00.529) 0:01:10.372 ********** 2025-04-14 00:52:54.406677 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406698 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406711 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.406724 | orchestrator | 2025-04-14 00:52:54.406736 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-04-14 00:52:54.406748 | orchestrator | Monday 14 April 2025 00:51:30 +0000 (0:00:00.411) 0:01:10.784 ********** 2025-04-14 00:52:54.406761 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:52:54.406773 | orchestrator | 2025-04-14 00:52:54.406785 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-04-14 00:52:54.406797 | orchestrator | Monday 14 April 2025 00:51:31 +0000 (0:00:01.134) 0:01:11.919 ********** 2025-04-14 00:52:54.406810 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.406822 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.406834 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.406847 | orchestrator | 2025-04-14 00:52:54.406859 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-04-14 00:52:54.406871 | orchestrator | Monday 14 April 2025 00:51:32 +0000 (0:00:00.905) 0:01:12.824 ********** 2025-04-14 00:52:54.406883 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.406896 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.406916 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.406929 | orchestrator | 2025-04-14 00:52:54.406942 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-04-14 00:52:54.406954 | orchestrator | Monday 14 April 2025 00:51:33 +0000 (0:00:00.832) 0:01:13.656 ********** 2025-04-14 00:52:54.406966 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.406979 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.406991 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407003 | orchestrator | 2025-04-14 00:52:54.407016 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-04-14 00:52:54.407028 | orchestrator | Monday 14 April 2025 00:51:34 +0000 (0:00:00.771) 0:01:14.428 ********** 2025-04-14 00:52:54.407040 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.407053 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.407065 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407077 | orchestrator | 2025-04-14 00:52:54.407089 | 
orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-04-14 00:52:54.407102 | orchestrator | Monday 14 April 2025 00:51:35 +0000 (0:00:01.136) 0:01:15.564 ********** 2025-04-14 00:52:54.407115 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.407127 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.407139 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407151 | orchestrator | 2025-04-14 00:52:54.407163 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-04-14 00:52:54.407176 | orchestrator | Monday 14 April 2025 00:51:35 +0000 (0:00:00.572) 0:01:16.137 ********** 2025-04-14 00:52:54.407188 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.407200 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.407218 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407230 | orchestrator | 2025-04-14 00:52:54.407242 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-04-14 00:52:54.407255 | orchestrator | Monday 14 April 2025 00:51:36 +0000 (0:00:00.608) 0:01:16.745 ********** 2025-04-14 00:52:54.407267 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.407279 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.407292 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407304 | orchestrator | 2025-04-14 00:52:54.407316 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-04-14 00:52:54.407329 | orchestrator | Monday 14 April 2025 00:51:37 +0000 (0:00:00.834) 0:01:17.580 ********** 2025-04-14 00:52:54.407341 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.407353 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.407365 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.407377 | orchestrator | 2025-04-14 00:52:54.407390 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-14 00:52:54.407402 | orchestrator | Monday 14 April 2025 00:51:38 +0000 (0:00:00.643) 0:01:18.223 ********** 2025-04-14 00:52:54.407415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407493 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407579 | orchestrator | 2025-04-14 00:52:54.407591 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-14 00:52:54.407604 | orchestrator | Monday 14 April 2025 00:51:39 +0000 (0:00:01.604) 0:01:19.828 ********** 2025-04-14 00:52:54.407616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407748 | orchestrator | 2025-04-14 00:52:54.407761 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 
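[annotation, not part of the job output] The loop items printed by the "Ensuring config directories exist", "Copying over config.json files for services" and "Check ovn containers" tasks all come from one per-service map covering ovn-northd, ovn-nb-db and ovn-sb-db. A minimal Python sketch of that structure, reconstructed only from the items shown in this log (the 'group', 'enabled': True and empty 'dimensions' keys are omitted for brevity; the loop at the end merely illustrates that each task emits one result line per service and host):

    # Per-service map as printed in the loop items above (illustrative only).
    ovn_services = {
        "ovn-northd": {
            "container_name": "ovn_northd",
            "image": "registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206",
            "volumes": [
                "/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "kolla_logs:/var/log/kolla/",
            ],
        },
        "ovn-nb-db": {
            "container_name": "ovn_nb_db",
            "image": "registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206",
            "volumes": [
                "/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "ovn_nb_db:/var/lib/openvswitch/ovn-nb/",
                "kolla_logs:/var/log/kolla/",
            ],
        },
        "ovn-sb-db": {
            "container_name": "ovn_sb_db",
            "image": "registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206",
            "volumes": [
                "/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "ovn_sb_db:/var/lib/openvswitch/ovn-sb/",
                "kolla_logs:/var/log/kolla/",
            ],
        },
    }

    # Each dict-iterating task above yields one "(item={'key': ...})" line
    # per (host, service) pair; this loop only mirrors that iteration order.
    for name, svc in ovn_services.items():
        print(f"item key={name} container={svc['container_name']} image={svc['image']}")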
2025-04-14 00:52:54.407774 | orchestrator | Monday 14 April 2025 00:51:44 +0000 (0:00:04.541) 0:01:24.370 ********** 2025-04-14 00:52:54.407786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.407922 | orchestrator | 2025-04-14 00:52:54.407935 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.407948 | orchestrator | Monday 14 April 2025 00:51:46 +0000 (0:00:02.554) 0:01:26.924 ********** 2025-04-14 00:52:54.407960 | orchestrator | 2025-04-14 00:52:54.407973 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.407985 | orchestrator | Monday 14 April 2025 00:51:46 +0000 (0:00:00.066) 0:01:26.990 ********** 2025-04-14 00:52:54.407998 | orchestrator | 2025-04-14 00:52:54.408010 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.408023 | orchestrator | Monday 14 April 2025 00:51:46 +0000 (0:00:00.056) 0:01:27.047 ********** 2025-04-14 00:52:54.408035 | orchestrator | 2025-04-14 00:52:54.408047 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-14 00:52:54.408076 | orchestrator | Monday 14 April 2025 00:51:47 +0000 (0:00:00.270) 0:01:27.317 ********** 2025-04-14 00:52:54.408088 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.408107 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.408119 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.408132 | orchestrator | 2025-04-14 00:52:54.408144 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-14 00:52:54.408157 | orchestrator | Monday 14 April 2025 00:51:54 +0000 (0:00:07.356) 0:01:34.673 ********** 2025-04-14 00:52:54.408169 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.408181 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.408194 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.408206 | orchestrator | 2025-04-14 00:52:54.408219 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-14 00:52:54.408231 | orchestrator | Monday 14 April 2025 00:52:02 +0000 (0:00:07.733) 0:01:42.407 ********** 2025-04-14 00:52:54.408243 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.408256 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.408268 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.408280 | orchestrator | 2025-04-14 00:52:54.408293 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-14 00:52:54.408305 | orchestrator | Monday 14 April 2025 00:52:10 +0000 (0:00:07.850) 0:01:50.258 ********** 2025-04-14 00:52:54.408318 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.408330 | orchestrator | 2025-04-14 00:52:54.408343 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-14 00:52:54.408355 | orchestrator | Monday 14 April 2025 00:52:10 +0000 (0:00:00.136) 0:01:50.395 ********** 2025-04-14 
00:52:54.408368 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.408380 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.408392 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.408405 | orchestrator | 2025-04-14 00:52:54.408423 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-14 00:52:54.408436 | orchestrator | Monday 14 April 2025 00:52:11 +0000 (0:00:01.448) 0:01:51.844 ********** 2025-04-14 00:52:54.408448 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.408460 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.408525 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.408540 | orchestrator | 2025-04-14 00:52:54.408553 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-14 00:52:54.408565 | orchestrator | Monday 14 April 2025 00:52:12 +0000 (0:00:00.653) 0:01:52.497 ********** 2025-04-14 00:52:54.408577 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.408590 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.408602 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.408615 | orchestrator | 2025-04-14 00:52:54.408627 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-14 00:52:54.408639 | orchestrator | Monday 14 April 2025 00:52:13 +0000 (0:00:00.941) 0:01:53.439 ********** 2025-04-14 00:52:54.408652 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.408664 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.408676 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.408688 | orchestrator | 2025-04-14 00:52:54.408700 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-14 00:52:54.408712 | orchestrator | Monday 14 April 2025 00:52:13 +0000 (0:00:00.651) 0:01:54.090 ********** 2025-04-14 00:52:54.408722 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.408732 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.408742 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.408752 | orchestrator | 2025-04-14 00:52:54.408763 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-14 00:52:54.408772 | orchestrator | Monday 14 April 2025 00:52:15 +0000 (0:00:01.127) 0:01:55.218 ********** 2025-04-14 00:52:54.408782 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.408792 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.408802 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.408812 | orchestrator | 2025-04-14 00:52:54.408822 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-04-14 00:52:54.408841 | orchestrator | Monday 14 April 2025 00:52:15 +0000 (0:00:00.737) 0:01:55.955 ********** 2025-04-14 00:52:54.408852 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.408862 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.408872 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.408882 | orchestrator | 2025-04-14 00:52:54.408892 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-04-14 00:52:54.408902 | orchestrator | Monday 14 April 2025 00:52:16 +0000 (0:00:00.521) 0:01:56.476 ********** 2025-04-14 00:52:54.408912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408922 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408933 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408943 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408954 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408964 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408980 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.408991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409001 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409016 | orchestrator | 2025-04-14 00:52:54.409027 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-04-14 00:52:54.409037 | orchestrator | Monday 14 April 2025 00:52:17 +0000 (0:00:01.544) 0:01:58.020 ********** 2025-04-14 00:52:54.409047 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409058 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409068 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409133 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409158 | orchestrator | 2025-04-14 00:52:54.409168 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-04-14 00:52:54.409179 | orchestrator | Monday 14 April 2025 00:52:22 +0000 (0:00:04.206) 0:02:02.227 ********** 2025-04-14 00:52:54.409189 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409199 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409234 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409244 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409258 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409274 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409289 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 00:52:54.409300 | orchestrator | 2025-04-14 00:52:54.409310 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.409321 | orchestrator | Monday 14 April 2025 00:52:25 +0000 (0:00:03.150) 0:02:05.378 ********** 2025-04-14 00:52:54.409331 | orchestrator | 2025-04-14 00:52:54.409341 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.409351 | orchestrator | Monday 14 April 2025 00:52:25 +0000 (0:00:00.232) 0:02:05.610 ********** 2025-04-14 00:52:54.409362 | orchestrator | 2025-04-14 00:52:54.409372 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-04-14 00:52:54.409382 | orchestrator | Monday 14 April 2025 00:52:25 +0000 (0:00:00.066) 0:02:05.677 ********** 2025-04-14 00:52:54.409392 | orchestrator | 2025-04-14 00:52:54.409402 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-04-14 00:52:54.409412 | orchestrator | Monday 14 April 2025 00:52:25 +0000 (0:00:00.062) 0:02:05.739 ********** 2025-04-14 00:52:54.409422 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.409432 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.409442 | orchestrator | 2025-04-14 00:52:54.409452 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-04-14 00:52:54.409463 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:06.624) 0:02:12.363 ********** 2025-04-14 00:52:54.409507 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.409518 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.409528 | orchestrator | 2025-04-14 00:52:54.409539 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-04-14 00:52:54.409549 | orchestrator 
| Monday 14 April 2025 00:52:38 +0000 (0:00:06.363) 0:02:18.727 ********** 2025-04-14 00:52:54.409559 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:52:54.409569 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:52:54.409579 | orchestrator | 2025-04-14 00:52:54.409589 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-04-14 00:52:54.409600 | orchestrator | Monday 14 April 2025 00:52:45 +0000 (0:00:06.727) 0:02:25.455 ********** 2025-04-14 00:52:54.409611 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:52:54.409621 | orchestrator | 2025-04-14 00:52:54.409631 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-04-14 00:52:54.409641 | orchestrator | Monday 14 April 2025 00:52:45 +0000 (0:00:00.341) 0:02:25.796 ********** 2025-04-14 00:52:54.409651 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.409662 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.409672 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.409682 | orchestrator | 2025-04-14 00:52:54.409692 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-04-14 00:52:54.409702 | orchestrator | Monday 14 April 2025 00:52:46 +0000 (0:00:00.827) 0:02:26.623 ********** 2025-04-14 00:52:54.409712 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.409722 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.409732 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.409743 | orchestrator | 2025-04-14 00:52:54.409753 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-04-14 00:52:54.409763 | orchestrator | Monday 14 April 2025 00:52:47 +0000 (0:00:00.731) 0:02:27.354 ********** 2025-04-14 00:52:54.409773 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.409790 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.409801 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.409812 | orchestrator | 2025-04-14 00:52:54.409827 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-04-14 00:52:54.409838 | orchestrator | Monday 14 April 2025 00:52:48 +0000 (0:00:01.241) 0:02:28.595 ********** 2025-04-14 00:52:54.409848 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:52:54.409858 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:52:54.409868 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:52:54.409878 | orchestrator | 2025-04-14 00:52:54.409888 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-04-14 00:52:54.409899 | orchestrator | Monday 14 April 2025 00:52:49 +0000 (0:00:00.948) 0:02:29.544 ********** 2025-04-14 00:52:54.409909 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.409919 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.409929 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.409939 | orchestrator | 2025-04-14 00:52:54.409949 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-04-14 00:52:54.409959 | orchestrator | Monday 14 April 2025 00:52:50 +0000 (0:00:01.094) 0:02:30.638 ********** 2025-04-14 00:52:54.409969 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:52:54.409979 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:52:54.409989 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:52:54.409999 | orchestrator | 2025-04-14 
00:52:54.410010 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:52:54.410041 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-04-14 00:52:54.410052 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-14 00:52:54.410066 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-04-14 00:52:57.448410 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:52:57.448609 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:52:57.448631 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 00:52:57.448646 | orchestrator | 2025-04-14 00:52:57.448661 | orchestrator | 2025-04-14 00:52:57.448676 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:52:57.448692 | orchestrator | Monday 14 April 2025 00:52:51 +0000 (0:00:01.369) 0:02:32.008 ********** 2025-04-14 00:52:57.448707 | orchestrator | =============================================================================== 2025-04-14 00:52:57.448721 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.85s 2025-04-14 00:52:57.448735 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 18.73s 2025-04-14 00:52:57.448749 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.58s 2025-04-14 00:52:57.448763 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.10s 2025-04-14 00:52:57.448777 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.98s 2025-04-14 00:52:57.448791 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.54s 2025-04-14 00:52:57.448816 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.21s 2025-04-14 00:52:57.448831 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.15s 2025-04-14 00:52:57.448845 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.06s 2025-04-14 00:52:57.448859 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.01s 2025-04-14 00:52:57.448873 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.55s 2025-04-14 00:52:57.448911 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.45s 2025-04-14 00:52:57.448928 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.26s 2025-04-14 00:52:57.448944 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.05s 2025-04-14 00:52:57.448959 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.90s 2025-04-14 00:52:57.448974 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.78s 2025-04-14 00:52:57.448990 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s 2025-04-14 00:52:57.449006 | orchestrator | ovn-db : Ensuring config 
directories exist ------------------------------ 1.54s 2025-04-14 00:52:57.449021 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.45s 2025-04-14 00:52:57.449036 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.37s 2025-04-14 00:52:57.449053 | orchestrator | 2025-04-14 00:52:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:52:57.449087 | orchestrator | 2025-04-14 00:52:57 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:52:57.449641 | orchestrator | 2025-04-14 00:52:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:52:57.452499 | orchestrator | 2025-04-14 00:52:57 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:00.503671 | orchestrator | 2025-04-14 00:52:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:00.503813 | orchestrator | 2025-04-14 00:53:00 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:00.504735 | orchestrator | 2025-04-14 00:53:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:00.507416 | orchestrator | 2025-04-14 00:53:00 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:00.508882 | orchestrator | 2025-04-14 00:53:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:03.560434 | orchestrator | 2025-04-14 00:53:03 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:06.608075 | orchestrator | 2025-04-14 00:53:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:06.608158 | orchestrator | 2025-04-14 00:53:03 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:06.608167 | orchestrator | 2025-04-14 00:53:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:06.608184 | orchestrator | 2025-04-14 00:53:06 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:06.611616 | orchestrator | 2025-04-14 00:53:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:06.612520 | orchestrator | 2025-04-14 00:53:06 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:09.663609 | orchestrator | 2025-04-14 00:53:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:09.663710 | orchestrator | 2025-04-14 00:53:09 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:09.666516 | orchestrator | 2025-04-14 00:53:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:09.668311 | orchestrator | 2025-04-14 00:53:09 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:09.668618 | orchestrator | 2025-04-14 00:53:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:12.706978 | orchestrator | 2025-04-14 00:53:12 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:12.708028 | orchestrator | 2025-04-14 00:53:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:12.708623 | orchestrator | 2025-04-14 00:53:12 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:12.709045 | orchestrator | 2025-04-14 00:53:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 
00:53:15.758506 | orchestrator | 2025-04-14 00:53:15 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:15.759978 | orchestrator | 2025-04-14 00:53:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:15.761854 | orchestrator | 2025-04-14 00:53:15 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:18.816098 | orchestrator | 2025-04-14 00:53:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:18.816237 | orchestrator | 2025-04-14 00:53:18 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:18.818295 | orchestrator | 2025-04-14 00:53:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:18.818949 | orchestrator | 2025-04-14 00:53:18 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:18.819058 | orchestrator | 2025-04-14 00:53:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:21.877288 | orchestrator | 2025-04-14 00:53:21 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:21.879476 | orchestrator | 2025-04-14 00:53:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:21.882514 | orchestrator | 2025-04-14 00:53:21 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:21.882878 | orchestrator | 2025-04-14 00:53:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:24.926319 | orchestrator | 2025-04-14 00:53:24 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:24.927548 | orchestrator | 2025-04-14 00:53:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:24.929250 | orchestrator | 2025-04-14 00:53:24 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:27.989889 | orchestrator | 2025-04-14 00:53:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:27.990115 | orchestrator | 2025-04-14 00:53:27 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:27.990683 | orchestrator | 2025-04-14 00:53:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:27.993237 | orchestrator | 2025-04-14 00:53:27 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:27.993564 | orchestrator | 2025-04-14 00:53:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:31.059998 | orchestrator | 2025-04-14 00:53:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:31.060716 | orchestrator | 2025-04-14 00:53:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:31.061628 | orchestrator | 2025-04-14 00:53:31 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:34.114630 | orchestrator | 2025-04-14 00:53:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:34.114788 | orchestrator | 2025-04-14 00:53:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:34.117521 | orchestrator | 2025-04-14 00:53:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:34.119373 | orchestrator | 2025-04-14 00:53:34 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:34.119658 | 
orchestrator | 2025-04-14 00:53:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:37.167627 | orchestrator | 2025-04-14 00:53:37 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:37.170702 | orchestrator | 2025-04-14 00:53:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:37.172808 | orchestrator | 2025-04-14 00:53:37 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:40.217518 | orchestrator | 2025-04-14 00:53:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:40.217672 | orchestrator | 2025-04-14 00:53:40 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:40.218605 | orchestrator | 2025-04-14 00:53:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:40.220196 | orchestrator | 2025-04-14 00:53:40 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:40.220499 | orchestrator | 2025-04-14 00:53:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:43.259606 | orchestrator | 2025-04-14 00:53:43 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:43.260736 | orchestrator | 2025-04-14 00:53:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:43.275574 | orchestrator | 2025-04-14 00:53:43 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:43.276042 | orchestrator | 2025-04-14 00:53:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:46.331843 | orchestrator | 2025-04-14 00:53:46 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:46.333378 | orchestrator | 2025-04-14 00:53:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:46.335051 | orchestrator | 2025-04-14 00:53:46 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:49.391877 | orchestrator | 2025-04-14 00:53:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:49.392034 | orchestrator | 2025-04-14 00:53:49 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:49.393591 | orchestrator | 2025-04-14 00:53:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:49.393630 | orchestrator | 2025-04-14 00:53:49 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:52.442568 | orchestrator | 2025-04-14 00:53:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:52.442728 | orchestrator | 2025-04-14 00:53:52 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:52.443540 | orchestrator | 2025-04-14 00:53:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:52.445456 | orchestrator | 2025-04-14 00:53:52 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:55.501937 | orchestrator | 2025-04-14 00:53:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:55.502134 | orchestrator | 2025-04-14 00:53:55 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:55.503737 | orchestrator | 2025-04-14 00:53:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:55.503808 | orchestrator | 2025-04-14 00:53:55 | INFO  | Task 
9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:58.544907 | orchestrator | 2025-04-14 00:53:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:53:58.545039 | orchestrator | 2025-04-14 00:53:58 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:53:58.548290 | orchestrator | 2025-04-14 00:53:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:53:58.550755 | orchestrator | 2025-04-14 00:53:58 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:53:58.551236 | orchestrator | 2025-04-14 00:53:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:01.599688 | orchestrator | 2025-04-14 00:54:01 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:01.600587 | orchestrator | 2025-04-14 00:54:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:01.600977 | orchestrator | 2025-04-14 00:54:01 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:01.601567 | orchestrator | 2025-04-14 00:54:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:04.648914 | orchestrator | 2025-04-14 00:54:04 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:04.650104 | orchestrator | 2025-04-14 00:54:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:04.652044 | orchestrator | 2025-04-14 00:54:04 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:04.652151 | orchestrator | 2025-04-14 00:54:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:07.709814 | orchestrator | 2025-04-14 00:54:07 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:07.714008 | orchestrator | 2025-04-14 00:54:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:07.714122 | orchestrator | 2025-04-14 00:54:07 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:10.754628 | orchestrator | 2025-04-14 00:54:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:10.754814 | orchestrator | 2025-04-14 00:54:10 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:10.760483 | orchestrator | 2025-04-14 00:54:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:13.806775 | orchestrator | 2025-04-14 00:54:10 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:13.806906 | orchestrator | 2025-04-14 00:54:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:13.806944 | orchestrator | 2025-04-14 00:54:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:13.807999 | orchestrator | 2025-04-14 00:54:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:13.809608 | orchestrator | 2025-04-14 00:54:13 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:13.809804 | orchestrator | 2025-04-14 00:54:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:16.864257 | orchestrator | 2025-04-14 00:54:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:16.866688 | orchestrator | 2025-04-14 00:54:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state 
STARTED 2025-04-14 00:54:19.913786 | orchestrator | 2025-04-14 00:54:16 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:19.913948 | orchestrator | 2025-04-14 00:54:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:19.914003 | orchestrator | 2025-04-14 00:54:19 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:19.914739 | orchestrator | 2025-04-14 00:54:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:19.917333 | orchestrator | 2025-04-14 00:54:19 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:22.960628 | orchestrator | 2025-04-14 00:54:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:22.960775 | orchestrator | 2025-04-14 00:54:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:22.963557 | orchestrator | 2025-04-14 00:54:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:22.966239 | orchestrator | 2025-04-14 00:54:22 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:26.019450 | orchestrator | 2025-04-14 00:54:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:26.019640 | orchestrator | 2025-04-14 00:54:26 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:26.019735 | orchestrator | 2025-04-14 00:54:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:26.020137 | orchestrator | 2025-04-14 00:54:26 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:26.020672 | orchestrator | 2025-04-14 00:54:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:29.077837 | orchestrator | 2025-04-14 00:54:29 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:29.079903 | orchestrator | 2025-04-14 00:54:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:29.081663 | orchestrator | 2025-04-14 00:54:29 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:29.084124 | orchestrator | 2025-04-14 00:54:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:32.130925 | orchestrator | 2025-04-14 00:54:32 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:32.133007 | orchestrator | 2025-04-14 00:54:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:32.135893 | orchestrator | 2025-04-14 00:54:32 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:35.185477 | orchestrator | 2025-04-14 00:54:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:35.185648 | orchestrator | 2025-04-14 00:54:35 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:35.187110 | orchestrator | 2025-04-14 00:54:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:35.188060 | orchestrator | 2025-04-14 00:54:35 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:38.244822 | orchestrator | 2025-04-14 00:54:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:38.244957 | orchestrator | 2025-04-14 00:54:38 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:41.305586 | orchestrator 
| 2025-04-14 00:54:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:41.305744 | orchestrator | 2025-04-14 00:54:38 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:41.305762 | orchestrator | 2025-04-14 00:54:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:41.305816 | orchestrator | 2025-04-14 00:54:41 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:41.305875 | orchestrator | 2025-04-14 00:54:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:41.306961 | orchestrator | 2025-04-14 00:54:41 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:41.307092 | orchestrator | 2025-04-14 00:54:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:44.357553 | orchestrator | 2025-04-14 00:54:44 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:44.359237 | orchestrator | 2025-04-14 00:54:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:44.360967 | orchestrator | 2025-04-14 00:54:44 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:44.361253 | orchestrator | 2025-04-14 00:54:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:47.422329 | orchestrator | 2025-04-14 00:54:47 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:47.424417 | orchestrator | 2025-04-14 00:54:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:47.426398 | orchestrator | 2025-04-14 00:54:47 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:50.476844 | orchestrator | 2025-04-14 00:54:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:50.477006 | orchestrator | 2025-04-14 00:54:50 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:50.477452 | orchestrator | 2025-04-14 00:54:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:50.480745 | orchestrator | 2025-04-14 00:54:50 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:50.481191 | orchestrator | 2025-04-14 00:54:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:53.536935 | orchestrator | 2025-04-14 00:54:53 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:53.539555 | orchestrator | 2025-04-14 00:54:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:53.542204 | orchestrator | 2025-04-14 00:54:53 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:53.542602 | orchestrator | 2025-04-14 00:54:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:56.592098 | orchestrator | 2025-04-14 00:54:56 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:56.592500 | orchestrator | 2025-04-14 00:54:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:56.593743 | orchestrator | 2025-04-14 00:54:56 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:56.594844 | orchestrator | 2025-04-14 00:54:56 | INFO  | Task 9b24bcd8-4a47-4bfc-abaf-567f68c39e87 is in state STARTED 2025-04-14 00:54:59.642755 | orchestrator | 2025-04-14 00:54:56 | 
INFO  | Wait 1 second(s) until the next check 2025-04-14 00:54:59.642904 | orchestrator | 2025-04-14 00:54:59 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:54:59.649356 | orchestrator | 2025-04-14 00:54:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:54:59.650294 | orchestrator | 2025-04-14 00:54:59 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:54:59.651314 | orchestrator | 2025-04-14 00:54:59 | INFO  | Task 9b24bcd8-4a47-4bfc-abaf-567f68c39e87 is in state STARTED 2025-04-14 00:55:02.703697 | orchestrator | 2025-04-14 00:54:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:02.703834 | orchestrator | 2025-04-14 00:55:02 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:02.705213 | orchestrator | 2025-04-14 00:55:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:02.707767 | orchestrator | 2025-04-14 00:55:02 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:02.709987 | orchestrator | 2025-04-14 00:55:02 | INFO  | Task 9b24bcd8-4a47-4bfc-abaf-567f68c39e87 is in state STARTED 2025-04-14 00:55:02.710179 | orchestrator | 2025-04-14 00:55:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:05.779628 | orchestrator | 2025-04-14 00:55:05 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:05.781664 | orchestrator | 2025-04-14 00:55:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:05.784179 | orchestrator | 2025-04-14 00:55:05 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:05.785711 | orchestrator | 2025-04-14 00:55:05 | INFO  | Task 9b24bcd8-4a47-4bfc-abaf-567f68c39e87 is in state STARTED 2025-04-14 00:55:08.838424 | orchestrator | 2025-04-14 00:55:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:08.838569 | orchestrator | 2025-04-14 00:55:08 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:08.840266 | orchestrator | 2025-04-14 00:55:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:08.842605 | orchestrator | 2025-04-14 00:55:08 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:08.843490 | orchestrator | 2025-04-14 00:55:08 | INFO  | Task 9b24bcd8-4a47-4bfc-abaf-567f68c39e87 is in state SUCCESS 2025-04-14 00:55:08.843592 | orchestrator | 2025-04-14 00:55:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:11.903970 | orchestrator | 2025-04-14 00:55:11 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:14.961751 | orchestrator | 2025-04-14 00:55:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:14.961876 | orchestrator | 2025-04-14 00:55:11 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:14.961897 | orchestrator | 2025-04-14 00:55:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:14.961930 | orchestrator | 2025-04-14 00:55:14 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:14.965121 | orchestrator | 2025-04-14 00:55:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:14.966648 | orchestrator | 2025-04-14 00:55:14 | INFO  | Task 
9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:18.020131 | orchestrator | 2025-04-14 00:55:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:18.020400 | orchestrator | 2025-04-14 00:55:18 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:18.020577 | orchestrator | 2025-04-14 00:55:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:18.021986 | orchestrator | 2025-04-14 00:55:18 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:21.087095 | orchestrator | 2025-04-14 00:55:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:21.087264 | orchestrator | 2025-04-14 00:55:21 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:21.088849 | orchestrator | 2025-04-14 00:55:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:21.090455 | orchestrator | 2025-04-14 00:55:21 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:24.134383 | orchestrator | 2025-04-14 00:55:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:24.134527 | orchestrator | 2025-04-14 00:55:24 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:24.136254 | orchestrator | 2025-04-14 00:55:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:24.138677 | orchestrator | 2025-04-14 00:55:24 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:27.201088 | orchestrator | 2025-04-14 00:55:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:27.201225 | orchestrator | 2025-04-14 00:55:27 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:27.201750 | orchestrator | 2025-04-14 00:55:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:27.202368 | orchestrator | 2025-04-14 00:55:27 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:30.265765 | orchestrator | 2025-04-14 00:55:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:30.265914 | orchestrator | 2025-04-14 00:55:30 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:30.269405 | orchestrator | 2025-04-14 00:55:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:30.270440 | orchestrator | 2025-04-14 00:55:30 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:30.270617 | orchestrator | 2025-04-14 00:55:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:33.322790 | orchestrator | 2025-04-14 00:55:33 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:33.324100 | orchestrator | 2025-04-14 00:55:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:33.326683 | orchestrator | 2025-04-14 00:55:33 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:36.388434 | orchestrator | 2025-04-14 00:55:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:36.388664 | orchestrator | 2025-04-14 00:55:36 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:36.388847 | orchestrator | 2025-04-14 00:55:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state 
STARTED 2025-04-14 00:55:36.388893 | orchestrator | 2025-04-14 00:55:36 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:39.426739 | orchestrator | 2025-04-14 00:55:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:39.426877 | orchestrator | 2025-04-14 00:55:39 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:39.428466 | orchestrator | 2025-04-14 00:55:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:39.430005 | orchestrator | 2025-04-14 00:55:39 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:42.487741 | orchestrator | 2025-04-14 00:55:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:42.487906 | orchestrator | 2025-04-14 00:55:42 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:42.491409 | orchestrator | 2025-04-14 00:55:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:42.493513 | orchestrator | 2025-04-14 00:55:42 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:45.548172 | orchestrator | 2025-04-14 00:55:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:45.548378 | orchestrator | 2025-04-14 00:55:45 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:45.553786 | orchestrator | 2025-04-14 00:55:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:45.553826 | orchestrator | 2025-04-14 00:55:45 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:48.623626 | orchestrator | 2025-04-14 00:55:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:48.623796 | orchestrator | 2025-04-14 00:55:48 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:48.623871 | orchestrator | 2025-04-14 00:55:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:48.624903 | orchestrator | 2025-04-14 00:55:48 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:51.677261 | orchestrator | 2025-04-14 00:55:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:51.677475 | orchestrator | 2025-04-14 00:55:51 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:51.679852 | orchestrator | 2025-04-14 00:55:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:51.681534 | orchestrator | 2025-04-14 00:55:51 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:51.681786 | orchestrator | 2025-04-14 00:55:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:54.719980 | orchestrator | 2025-04-14 00:55:54 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:54.721685 | orchestrator | 2025-04-14 00:55:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:54.724229 | orchestrator | 2025-04-14 00:55:54 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:55:57.778672 | orchestrator | 2025-04-14 00:55:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:55:57.778809 | orchestrator | 2025-04-14 00:55:57 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:55:57.779163 | orchestrator 
| 2025-04-14 00:55:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:55:57.781782 | orchestrator | 2025-04-14 00:55:57 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:00.841464 | orchestrator | 2025-04-14 00:55:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:00.841612 | orchestrator | 2025-04-14 00:56:00 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:00.842655 | orchestrator | 2025-04-14 00:56:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:00.843828 | orchestrator | 2025-04-14 00:56:00 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:03.907195 | orchestrator | 2025-04-14 00:56:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:03.907362 | orchestrator | 2025-04-14 00:56:03 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:03.907818 | orchestrator | 2025-04-14 00:56:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:03.908793 | orchestrator | 2025-04-14 00:56:03 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:03.908878 | orchestrator | 2025-04-14 00:56:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:06.960785 | orchestrator | 2025-04-14 00:56:06 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:06.961677 | orchestrator | 2025-04-14 00:56:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:06.963798 | orchestrator | 2025-04-14 00:56:06 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:10.002010 | orchestrator | 2025-04-14 00:56:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:10.002213 | orchestrator | 2025-04-14 00:56:10 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:10.002550 | orchestrator | 2025-04-14 00:56:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:10.004542 | orchestrator | 2025-04-14 00:56:10 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:13.050321 | orchestrator | 2025-04-14 00:56:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:13.050472 | orchestrator | 2025-04-14 00:56:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:13.053667 | orchestrator | 2025-04-14 00:56:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:13.055866 | orchestrator | 2025-04-14 00:56:13 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:13.058697 | orchestrator | 2025-04-14 00:56:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:16.108669 | orchestrator | 2025-04-14 00:56:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:16.108908 | orchestrator | 2025-04-14 00:56:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:16.111008 | orchestrator | 2025-04-14 00:56:16 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:16.111121 | orchestrator | 2025-04-14 00:56:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:19.151680 | orchestrator | 2025-04-14 00:56:19 | INFO  | Task 
d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:19.157553 | orchestrator | 2025-04-14 00:56:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:22.212852 | orchestrator | 2025-04-14 00:56:19 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:22.212971 | orchestrator | 2025-04-14 00:56:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:22.213008 | orchestrator | 2025-04-14 00:56:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:22.213698 | orchestrator | 2025-04-14 00:56:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:22.214640 | orchestrator | 2025-04-14 00:56:22 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:25.265746 | orchestrator | 2025-04-14 00:56:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:25.265899 | orchestrator | 2025-04-14 00:56:25 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:25.267636 | orchestrator | 2025-04-14 00:56:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:25.269575 | orchestrator | 2025-04-14 00:56:25 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:28.325682 | orchestrator | 2025-04-14 00:56:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:28.325819 | orchestrator | 2025-04-14 00:56:28 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:28.328005 | orchestrator | 2025-04-14 00:56:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:28.329735 | orchestrator | 2025-04-14 00:56:28 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:28.329842 | orchestrator | 2025-04-14 00:56:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:31.383822 | orchestrator | 2025-04-14 00:56:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:31.386152 | orchestrator | 2025-04-14 00:56:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:31.389563 | orchestrator | 2025-04-14 00:56:31 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:34.449094 | orchestrator | 2025-04-14 00:56:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:34.449233 | orchestrator | 2025-04-14 00:56:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:34.453379 | orchestrator | 2025-04-14 00:56:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:34.454895 | orchestrator | 2025-04-14 00:56:34 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:34.455308 | orchestrator | 2025-04-14 00:56:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:37.491529 | orchestrator | 2025-04-14 00:56:37 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:37.492224 | orchestrator | 2025-04-14 00:56:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:37.494069 | orchestrator | 2025-04-14 00:56:37 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:37.494321 | orchestrator | 2025-04-14 00:56:37 | INFO  | Wait 1 second(s) until the next 
check 2025-04-14 00:56:40.533984 | orchestrator | 2025-04-14 00:56:40 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:40.534808 | orchestrator | 2025-04-14 00:56:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:40.536521 | orchestrator | 2025-04-14 00:56:40 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:40.536595 | orchestrator | 2025-04-14 00:56:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:43.580431 | orchestrator | 2025-04-14 00:56:43 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:43.587466 | orchestrator | 2025-04-14 00:56:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:46.633056 | orchestrator | 2025-04-14 00:56:43 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:46.633178 | orchestrator | 2025-04-14 00:56:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:46.633212 | orchestrator | 2025-04-14 00:56:46 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:46.635523 | orchestrator | 2025-04-14 00:56:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:46.637591 | orchestrator | 2025-04-14 00:56:46 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state STARTED 2025-04-14 00:56:46.637861 | orchestrator | 2025-04-14 00:56:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:49.699910 | orchestrator | 2025-04-14 00:56:49 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:56:49.700418 | orchestrator | 2025-04-14 00:56:49 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:49.704296 | orchestrator | 2025-04-14 00:56:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:49.724262 | orchestrator | 2025-04-14 00:56:49.724356 | orchestrator | None 2025-04-14 00:56:49.724376 | orchestrator | 2025-04-14 00:56:49.724392 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:56:49.724409 | orchestrator | 2025-04-14 00:56:49.724425 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:56:49.724441 | orchestrator | Monday 14 April 2025 00:48:53 +0000 (0:00:00.801) 0:00:00.801 ********** 2025-04-14 00:56:49.724457 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.724474 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.724490 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.724506 | orchestrator | 2025-04-14 00:56:49.724522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:56:49.724538 | orchestrator | Monday 14 April 2025 00:48:54 +0000 (0:00:00.832) 0:00:01.633 ********** 2025-04-14 00:56:49.724554 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-04-14 00:56:49.724570 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-04-14 00:56:49.724585 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-04-14 00:56:49.724601 | orchestrator | 2025-04-14 00:56:49.724616 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-04-14 00:56:49.724631 | orchestrator | 2025-04-14 00:56:49.724646 | 
orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-14 00:56:49.724662 | orchestrator | Monday 14 April 2025 00:48:55 +0000 (0:00:00.411) 0:00:02.044 ********** 2025-04-14 00:56:49.724678 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.724693 | orchestrator | 2025-04-14 00:56:49.724709 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-04-14 00:56:49.724724 | orchestrator | Monday 14 April 2025 00:48:56 +0000 (0:00:01.090) 0:00:03.134 ********** 2025-04-14 00:56:49.724738 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.724752 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.724766 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.724780 | orchestrator | 2025-04-14 00:56:49.724794 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-14 00:56:49.724848 | orchestrator | Monday 14 April 2025 00:48:57 +0000 (0:00:01.465) 0:00:04.599 ********** 2025-04-14 00:56:49.724865 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.725174 | orchestrator | 2025-04-14 00:56:49.725209 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-04-14 00:56:49.725275 | orchestrator | Monday 14 April 2025 00:48:58 +0000 (0:00:00.960) 0:00:05.560 ********** 2025-04-14 00:56:49.725296 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.725311 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.725325 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.725339 | orchestrator | 2025-04-14 00:56:49.725353 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-04-14 00:56:49.725367 | orchestrator | Monday 14 April 2025 00:49:00 +0000 (0:00:01.366) 0:00:06.927 ********** 2025-04-14 00:56:49.725381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725395 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725432 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725447 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725461 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-04-14 00:56:49.725488 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-14 00:56:49.725504 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-14 00:56:49.725523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-14 00:56:49.725539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-14 00:56:49.725561 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-04-14 00:56:49.725584 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.unix.max_dgram_qlen', 'value': 128}) 2025-04-14 00:56:49.725606 | orchestrator | 2025-04-14 00:56:49.725632 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-14 00:56:49.725652 | orchestrator | Monday 14 April 2025 00:49:04 +0000 (0:00:04.757) 0:00:11.684 ********** 2025-04-14 00:56:49.726472 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-14 00:56:49.726546 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-14 00:56:49.726563 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-14 00:56:49.726579 | orchestrator | 2025-04-14 00:56:49.726595 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-14 00:56:49.726610 | orchestrator | Monday 14 April 2025 00:49:06 +0000 (0:00:02.063) 0:00:13.748 ********** 2025-04-14 00:56:49.726626 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-04-14 00:56:49.726641 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-04-14 00:56:49.726656 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-04-14 00:56:49.726671 | orchestrator | 2025-04-14 00:56:49.726687 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-14 00:56:49.726702 | orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:02.136) 0:00:15.885 ********** 2025-04-14 00:56:49.726717 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-04-14 00:56:49.726733 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.726764 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-04-14 00:56:49.726779 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.726793 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-04-14 00:56:49.726807 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.726821 | orchestrator | 2025-04-14 00:56:49.726835 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-04-14 00:56:49.726849 | orchestrator | Monday 14 April 2025 00:49:09 +0000 (0:00:00.665) 0:00:16.550 ********** 2025-04-14 00:56:49.726866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.726886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-14 
00:56:49.726919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.726960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.726986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.727514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.727547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.727562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.727592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.727607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.727623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.727638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.727693 | orchestrator | 2025-04-14 00:56:49.727745 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-04-14 00:56:49.727760 | orchestrator | Monday 14 April 2025 00:49:12 +0000 (0:00:02.523) 0:00:19.074 ********** 2025-04-14 00:56:49.727774 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.727789 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.727803 | orchestrator | changed: [testbed-node-2] 
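The "Ensuring config directories exist" task above iterates over the loadbalancer service map (haproxy, proxysql, keepalived, haproxy-ssh) and only acts on entries with 'enabled': True, which is why every haproxy-ssh item is skipped. A minimal Ansible sketch of that pattern is shown below; the loadbalancer_services variable name is hypothetical and stands in for the dictionary visible in the log items, and the real kolla-ansible task additionally sets ownership on the directories.

  # Illustrative sketch only: create one config directory per enabled
  # service and skip the disabled ones (here: haproxy-ssh).
  - name: Ensuring config directories exist
    ansible.builtin.file:
      path: "/etc/kolla/{{ item.key }}"
      state: directory
      mode: "0770"
    when: item.value.enabled | bool
    with_dict: "{{ loadbalancer_services }}"

The same service map carries each container's healthcheck definition (for example 'healthcheck_curl http://192.168.16.10:61313' every 30 seconds with 3 retries), which is handed to the container engine later in the play when the haproxy, proxysql and keepalived containers are created.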
2025-04-14 00:56:49.727817 | orchestrator | 2025-04-14 00:56:49.727874 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-04-14 00:56:49.727912 | orchestrator | Monday 14 April 2025 00:49:13 +0000 (0:00:01.561) 0:00:20.635 ********** 2025-04-14 00:56:49.728310 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-04-14 00:56:49.728345 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-04-14 00:56:49.728366 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-04-14 00:56:49.728383 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-04-14 00:56:49.728398 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-04-14 00:56:49.728424 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-04-14 00:56:49.728438 | orchestrator | 2025-04-14 00:56:49.728452 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-04-14 00:56:49.728466 | orchestrator | Monday 14 April 2025 00:49:17 +0000 (0:00:03.961) 0:00:24.597 ********** 2025-04-14 00:56:49.728480 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.728494 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.728508 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.728522 | orchestrator | 2025-04-14 00:56:49.728536 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-04-14 00:56:49.728550 | orchestrator | Monday 14 April 2025 00:49:20 +0000 (0:00:02.707) 0:00:27.305 ********** 2025-04-14 00:56:49.728575 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.728590 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.728608 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.728631 | orchestrator | 2025-04-14 00:56:49.729111 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-04-14 00:56:49.729149 | orchestrator | Monday 14 April 2025 00:49:22 +0000 (0:00:02.130) 0:00:29.436 ********** 2025-04-14 00:56:49.729171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.729222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.729353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.729373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.729399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.729426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.729439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.729453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.729466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.729479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.729492 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.729505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.729524 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.729554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.729568 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.729581 | orchestrator | 2025-04-14 00:56:49.729594 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-04-14 00:56:49.729606 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:02.964) 0:00:32.400 ********** 2025-04-14 00:56:49.729619 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.729632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.730309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.730362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.730375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730418 | orchestrator | 2025-04-14 00:56:49.730430 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-04-14 00:56:49.730443 | orchestrator | Monday 14 April 2025 00:49:31 +0000 (0:00:06.134) 0:00:38.535 ********** 2025-04-14 00:56:49.730456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.730567 | orchestrator | 2025-04-14 00:56:49 | INFO  | Task 9f7b873f-c3b3-4d63-94d3-fcf574c33959 is in state SUCCESS 2025-04-14 00:56:49.730582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.730595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.730626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.730659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.730671 | orchestrator | 2025-04-14 00:56:49.730690 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-04-14 00:56:49.730703 | orchestrator | Monday 14 April 2025 00:49:35 +0000 (0:00:03.344) 0:00:41.879 ********** 2025-04-14 00:56:49.730715 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-14 00:56:49.730729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-14 00:56:49.730742 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-04-14 00:56:49.730755 | orchestrator | 2025-04-14 00:56:49.730767 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-04-14 00:56:49.730780 | orchestrator | Monday 14 April 2025 00:49:38 +0000 (0:00:03.040) 0:00:44.920 ********** 2025-04-14 00:56:49.730792 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-14 00:56:49.730805 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-14 00:56:49.730817 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-04-14 00:56:49.730829 | orchestrator | 2025-04-14 00:56:49.730842 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-04-14 00:56:49.730855 | orchestrator | Monday 14 April 2025 00:49:44 +0000 (0:00:06.074) 0:00:50.995 ********** 2025-04-14 00:56:49.730908 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.730939 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.730952 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.730964 | orchestrator | 2025-04-14 00:56:49.731125 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-04-14 00:56:49.731140 | orchestrator | Monday 14 April 2025 00:49:45 +0000 (0:00:01.209) 0:00:52.204 ********** 2025-04-14 00:56:49.731154 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-14 00:56:49.731355 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-14 00:56:49.731372 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-04-14 00:56:49.731385 | orchestrator | 2025-04-14 00:56:49.731398 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-04-14 00:56:49.731410 | orchestrator | Monday 14 April 2025 00:49:48 +0000 (0:00:03.197) 0:00:55.401 ********** 2025-04-14 00:56:49.731431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-14 00:56:49.731444 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-14 00:56:49.731456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-04-14 00:56:49.731469 | orchestrator | 2025-04-14 00:56:49.731481 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-04-14 00:56:49.731494 | orchestrator | Monday 14 April 2025 00:49:51 +0000 (0:00:03.319) 0:00:58.721 ********** 2025-04-14 00:56:49.731506 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-04-14 00:56:49.731529 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-04-14 00:56:49.731541 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-04-14 00:56:49.731554 | orchestrator | 2025-04-14 00:56:49.731567 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-04-14 00:56:49.731579 | orchestrator | Monday 14 April 2025 00:49:54 +0000 (0:00:02.333) 0:01:01.054 ********** 2025-04-14 00:56:49.731592 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-04-14 00:56:49.731604 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-04-14 00:56:49.731617 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-04-14 00:56:49.731629 | orchestrator | 2025-04-14 00:56:49.731641 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-04-14 00:56:49.731654 | orchestrator | Monday 14 April 2025 00:49:56 +0000 (0:00:02.694) 0:01:03.748 ********** 2025-04-14 
00:56:49.731666 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.731679 | orchestrator | 2025-04-14 00:56:49.731691 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-04-14 00:56:49.731703 | orchestrator | Monday 14 April 2025 00:49:57 +0000 (0:00:00.949) 0:01:04.698 ********** 2025-04-14 00:56:49.731717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.731818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.731836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.732162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.732184 | orchestrator | 2025-04-14 00:56:49.732201 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-04-14 00:56:49.732219 | orchestrator | Monday 14 April 2025 00:50:01 +0000 (0:00:03.706) 0:01:08.404 ********** 2025-04-14 00:56:49.732268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732309 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.732327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732395 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.732413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732470 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.732486 | orchestrator | 2025-04-14 00:56:49.732502 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-04-14 00:56:49.732519 | orchestrator | Monday 14 April 2025 00:50:02 +0000 (0:00:00.864) 0:01:09.268 ********** 2025-04-14 00:56:49.732536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732585 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732603 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.732625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732656 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.732667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-04-14 00:56:49.732677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-04-14 00:56:49.732688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-04-14 00:56:49.732699 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.732709 | orchestrator | 2025-04-14 00:56:49.732724 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-04-14 00:56:49.732740 | orchestrator | Monday 14 April 2025 00:50:03 +0000 (0:00:01.061) 0:01:10.329 ********** 2025-04-14 00:56:49.732751 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-14 00:56:49.732761 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-14 00:56:49.732771 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-04-14 00:56:49.732781 | orchestrator | 2025-04-14 00:56:49.732792 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-04-14 00:56:49.732802 | orchestrator | Monday 14 April 2025 00:50:05 +0000 (0:00:02.314) 0:01:12.643 ********** 2025-04-14 00:56:49.732812 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-14 00:56:49.732823 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-14 00:56:49.732841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-04-14 00:56:49.732873 | orchestrator | 2025-04-14 00:56:49.732890 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-04-14 00:56:49.732907 | orchestrator | Monday 14 April 2025 00:50:09 +0000 (0:00:03.738) 0:01:16.382 ********** 2025-04-14 00:56:49.732922 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 00:56:49.732937 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 00:56:49.733536 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 00:56:49.733572 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 00:56:49.733583 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.733594 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 
00:56:49.733604 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.733614 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 00:56:49.733624 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.733635 | orchestrator | 2025-04-14 00:56:49.733645 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-04-14 00:56:49.733655 | orchestrator | Monday 14 April 2025 00:50:11 +0000 (0:00:01.920) 0:01:18.303 ********** 2025-04-14 00:56:49.733674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-04-14 00:56:49.733765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.733776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.733787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.733810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.733821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-04-14 00:56:49.733832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8', '__omit_place_holder__44e02f96e554a2db4fe3e129e49520accfcb1db8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-04-14 00:56:49.733842 | orchestrator | 2025-04-14 00:56:49.733853 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-04-14 00:56:49.733863 | orchestrator | Monday 14 April 2025 00:50:14 +0000 (0:00:03.124) 0:01:21.427 ********** 2025-04-14 00:56:49.733873 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.733883 | orchestrator | 2025-04-14 00:56:49.733894 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-04-14 00:56:49.733904 | orchestrator | Monday 14 April 2025 00:50:15 +0000 (0:00:01.112) 0:01:22.539 ********** 2025-04-14 00:56:49.733914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-14 00:56:49.733926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.733943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-14 00:56:49.738384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.738401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-04-14 00:56:49.738435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.738500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738547 | orchestrator | 2025-04-14 00:56:49.738563 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-04-14 00:56:49.738577 | orchestrator | Monday 14 
April 2025 00:50:21 +0000 (0:00:05.419) 0:01:27.959 ********** 2025-04-14 00:56:49.738592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-14 00:56:49.738617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.738632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738682 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.738698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-14 00:56:49.738712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.738727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738763 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.738787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-04-14 00:56:49.738810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.738826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.738855 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.738875 | orchestrator | 2025-04-14 00:56:49.738890 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-04-14 00:56:49.738904 | orchestrator | Monday 14 April 2025 00:50:22 +0000 (0:00:01.251) 0:01:29.211 ********** 2025-04-14 00:56:49.738919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.738934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.738954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.738969 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.738983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.738998 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.739013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.739027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-04-14 00:56:49.739041 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.739055 | orchestrator | 2025-04-14 00:56:49.739069 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-04-14 00:56:49.739084 | orchestrator 
| Monday 14 April 2025 00:50:24 +0000 (0:00:01.871) 0:01:31.082 ********** 2025-04-14 00:56:49.739098 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.739112 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.739126 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.739139 | orchestrator | 2025-04-14 00:56:49.739153 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-04-14 00:56:49.739167 | orchestrator | Monday 14 April 2025 00:50:25 +0000 (0:00:01.480) 0:01:32.562 ********** 2025-04-14 00:56:49.739181 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.739195 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.739210 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.739224 | orchestrator | 2025-04-14 00:56:49.739256 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-04-14 00:56:49.739271 | orchestrator | Monday 14 April 2025 00:50:28 +0000 (0:00:02.360) 0:01:34.923 ********** 2025-04-14 00:56:49.739291 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.739305 | orchestrator | 2025-04-14 00:56:49.739319 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-04-14 00:56:49.739333 | orchestrator | Monday 14 April 2025 00:50:29 +0000 (0:00:00.983) 0:01:35.907 ********** 2025-04-14 00:56:49.739373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.739403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.739458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.739549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739580 | orchestrator | 2025-04-14 00:56:49.739594 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-04-14 00:56:49.739608 | orchestrator | Monday 14 April 2025 00:50:34 +0000 (0:00:05.638) 0:01:41.546 ********** 2025-04-14 00:56:49.739623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.739657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.739680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739736 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.739751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739765 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.739797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.739813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.739849 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.739864 | orchestrator | 2025-04-14 00:56:49.739878 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-04-14 00:56:49.739892 | orchestrator | Monday 14 April 2025 00:50:35 +0000 (0:00:01.124) 0:01:42.670 ********** 2025-04-14 00:56:49.739906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.739921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.739935 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.739949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.739970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.739985 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.740012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.740028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-04-14 00:56:49.740053 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.740068 | orchestrator | 2025-04-14 00:56:49.740082 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-04-14 00:56:49.740096 | orchestrator | Monday 14 April 2025 00:50:36 +0000 (0:00:01.007) 0:01:43.677 ********** 2025-04-14 00:56:49.740110 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.740124 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.740138 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.740151 | orchestrator | 2025-04-14 00:56:49.740165 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-04-14 00:56:49.740179 | orchestrator | Monday 14 April 2025 00:50:38 +0000 (0:00:01.945) 0:01:45.623 ********** 2025-04-14 00:56:49.740193 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.740207 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.740228 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.740273 | orchestrator | 2025-04-14 00:56:49.740288 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-04-14 00:56:49.740302 | orchestrator | Monday 14 April 2025 00:50:41 +0000 (0:00:02.651) 0:01:48.275 ********** 2025-04-14 00:56:49.740315 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.740329 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.740343 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.740357 | orchestrator | 2025-04-14 00:56:49.740387 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-04-14 00:56:49.740403 | orchestrator | Monday 14 April 2025 00:50:41 +0000 (0:00:00.280) 0:01:48.556 ********** 2025-04-14 00:56:49.740417 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.740431 | orchestrator | 2025-04-14 00:56:49.740445 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-04-14 00:56:49.740459 | orchestrator | Monday 14 April 2025 00:50:42 +0000 (0:00:01.000) 0:01:49.556 ********** 2025-04-14 00:56:49.740485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-14 00:56:49.740501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-14 00:56:49.740516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-04-14 00:56:49.740531 | orchestrator | 2025-04-14 00:56:49.740545 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-04-14 00:56:49.740559 | orchestrator | Monday 14 April 2025 00:50:45 +0000 (0:00:03.138) 0:01:52.695 ********** 2025-04-14 00:56:49.740582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-14 00:56:49.740604 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.740638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-14 00:56:49.740655 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.740669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-04-14 00:56:49.740684 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.740698 | orchestrator | 2025-04-14 00:56:49.740712 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-04-14 00:56:49.740726 | orchestrator | Monday 14 April 2025 00:50:47 +0000 (0:00:01.860) 0:01:54.555 ********** 2025-04-14 00:56:49.740740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740771 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.740786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740821 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.740835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-04-14 00:56:49.740888 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.740902 | orchestrator | 2025-04-14 00:56:49.740916 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-04-14 00:56:49.740930 | orchestrator | Monday 14 April 2025 00:50:49 +0000 (0:00:02.139) 0:01:56.695 ********** 2025-04-14 00:56:49.740944 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.740958 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.740972 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.740986 | orchestrator | 2025-04-14 00:56:49.741000 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-04-14 00:56:49.741014 | orchestrator | Monday 14 April 2025 00:50:50 +0000 (0:00:00.730) 0:01:57.426 ********** 2025-04-14 00:56:49.741028 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.741042 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.741056 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.741070 | orchestrator | 2025-04-14 00:56:49.741084 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-04-14 00:56:49.741098 | orchestrator | Monday 14 April 2025 00:50:52 +0000 (0:00:01.482) 0:01:58.909 ********** 2025-04-14 00:56:49.741112 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.741125 | orchestrator | 2025-04-14 00:56:49.741139 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-04-14 00:56:49.741153 | orchestrator | Monday 14 April 2025 00:50:52 +0000 (0:00:00.837) 0:01:59.746 ********** 2025-04-14 00:56:49.741167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.741190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.741333 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.741432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741498 | orchestrator | 2025-04-14 00:56:49.741512 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-04-14 00:56:49.741531 | orchestrator | Monday 14 April 2025 00:50:57 +0000 (0:00:04.408) 0:02:04.154 ********** 2025-04-14 00:56:49.741546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.741561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741637 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.741658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.741673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-04-14 00:56:49.741687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741741 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.741755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.741774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.741822 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.741835 | orchestrator | 2025-04-14 00:56:49.741848 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-04-14 00:56:49.741860 | orchestrator | Monday 14 April 2025 00:50:59 +0000 (0:00:01.743) 0:02:05.897 ********** 2025-04-14 00:56:49.741873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741915 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.741927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741954 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.741966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-04-14 00:56:49.741998 | orchestrator | skipping: [testbed-node-2] 2025-04-14 
00:56:49.742011 | orchestrator | 2025-04-14 00:56:49.742058 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-04-14 00:56:49.742072 | orchestrator | Monday 14 April 2025 00:51:01 +0000 (0:00:01.972) 0:02:07.870 ********** 2025-04-14 00:56:49.742085 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.742097 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.742110 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.742122 | orchestrator | 2025-04-14 00:56:49.742135 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-04-14 00:56:49.742147 | orchestrator | Monday 14 April 2025 00:51:02 +0000 (0:00:01.575) 0:02:09.446 ********** 2025-04-14 00:56:49.742160 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.742172 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.742184 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.742197 | orchestrator | 2025-04-14 00:56:49.742210 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-04-14 00:56:49.742222 | orchestrator | Monday 14 April 2025 00:51:04 +0000 (0:00:02.404) 0:02:11.851 ********** 2025-04-14 00:56:49.742253 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.742267 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.742279 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.742297 | orchestrator | 2025-04-14 00:56:49.742310 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-04-14 00:56:49.742322 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:00.375) 0:02:12.226 ********** 2025-04-14 00:56:49.742335 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.742347 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.742359 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.742372 | orchestrator | 2025-04-14 00:56:49.742384 | orchestrator | TASK [include_role : designate] ************************************************ 2025-04-14 00:56:49.742396 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:00.548) 0:02:12.774 ********** 2025-04-14 00:56:49.742409 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.742421 | orchestrator | 2025-04-14 00:56:49.742434 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-04-14 00:56:49.742446 | orchestrator | Monday 14 April 2025 00:51:06 +0000 (0:00:01.084) 0:02:13.859 ********** 2025-04-14 00:56:49.742460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-04-14 00:56:49.742490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.742512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 00:56:49.742625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.742640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 00:56:49.742751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.742776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742848 | orchestrator | 2025-04-14 00:56:49.742876 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-04-14 00:56:49.742890 | orchestrator | Monday 14 April 2025 00:51:12 +0000 (0:00:05.332) 0:02:19.192 ********** 2025-04-14 00:56:49.742903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 00:56:49.742925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.742939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.742999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743027 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.743048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 00:56:49.743062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.743076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743175 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.743188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 00:56:49.743201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 00:56:49.743215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.743312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-04-14 00:56:49.743327 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.743339 | orchestrator | 2025-04-14 00:56:49.743352 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-04-14 00:56:49.743365 | orchestrator | Monday 14 April 2025 00:51:13 +0000 (0:00:01.055) 0:02:20.248 ********** 2025-04-14 00:56:49.743377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743403 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.743416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743447 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.743459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-04-14 00:56:49.743488 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.743501 | orchestrator | 2025-04-14 00:56:49.743513 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-04-14 00:56:49.743526 | orchestrator | Monday 14 April 2025 00:51:14 +0000 (0:00:01.472) 0:02:21.720 ********** 2025-04-14 00:56:49.743538 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.743551 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.743563 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.743575 | orchestrator | 2025-04-14 00:56:49.743588 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-04-14 00:56:49.743600 | orchestrator | Monday 14 April 2025 00:51:16 +0000 (0:00:01.383) 0:02:23.103 ********** 2025-04-14 00:56:49.743612 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.743625 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.743637 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.743649 | orchestrator | 2025-04-14 00:56:49.743662 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-04-14 00:56:49.743674 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:02.145) 0:02:25.249 ********** 2025-04-14 00:56:49.743686 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.743699 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.743711 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.743724 | orchestrator | 2025-04-14 00:56:49.743736 | 
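The designate results above follow a single pattern: in the "Copying over designate haproxy config" loop, only designate-api is reported as changed on each node, because it is the only enabled service in the dictionary that carries a 'haproxy' mapping; designate-central, designate-mdns, designate-producer and designate-worker define no 'haproxy' section and designate-sink is disabled, so all of them are skipped. The follow-up "single external frontend" and "Configuring firewall" tasks skip every item here, which suggests those features are simply not enabled for this testbed. The healthcheck entries also differ by service type: the HTTP API uses healthcheck_curl against its bind address and port, while the non-HTTP designate services use healthcheck_port <process> 5672 (effectively a check for a connection from that process to the RabbitMQ port) and the bind9 backend uses healthcheck_listen named 53. A minimal Python sketch of the per-item selection rule, illustrative only and not the kolla-ansible haproxy-config role itself:

    # Illustrative sketch of the selection pattern visible in the loop output above;
    # the same rule expressed in plain Python, not the actual Ansible role code.
    designate_services = {
        "designate-api": {
            "enabled": True,
            "haproxy": {  # only this service defines load-balancer frontends
                "designate_api": {"port": "9001", "external": False},
                "designate_api_external": {"port": "9001", "external": True,
                                           "external_fqdn": "api.testbed.osism.xyz"},
            },
        },
        "designate-central": {"enabled": True},   # no 'haproxy' mapping -> skipped
        "designate-worker": {"enabled": True},    # no 'haproxy' mapping -> skipped
        "designate-sink": {"enabled": False},     # disabled -> skipped
    }

    for name, svc in designate_services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            print(f"changed: render HAProxy config for {name}")
        else:
            print(f"skipping: {name}")

The ProxySQL users/rules tasks that report changed on all three nodes drop per-service ProxySQL configuration fragments (the database user and query-routing rules for the designate schema); the same pair of tasks repeats below for every service that uses the database.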
orchestrator | TASK [include_role : glance] *************************************************** 2025-04-14 00:56:49.743763 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:00.564) 0:02:25.814 ********** 2025-04-14 00:56:49.743779 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.743792 | orchestrator | 2025-04-14 00:56:49.743805 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-04-14 00:56:49.743817 | orchestrator | Monday 14 April 2025 00:51:20 +0000 (0:00:01.293) 0:02:27.107 ********** 2025-04-14 00:56:49.743831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 00:56:49.743860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.743892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 00:56:49.743921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.743952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 00:56:49.743980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.743999 | orchestrator | 2025-04-14 00:56:49.744012 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-04-14 00:56:49.744025 | orchestrator | Monday 14 April 2025 00:51:26 +0000 (0:00:06.706) 0:02:33.814 ********** 2025-04-14 00:56:49.744054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 00:56:49.744078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.744098 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.744134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 00:56:49.744149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.744177 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.744190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 
00:56:49.744220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.744298 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.744313 | orchestrator | 2025-04-14 00:56:49.744325 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-04-14 00:56:49.744343 | orchestrator | Monday 14 April 2025 00:51:32 +0000 (0:00:05.742) 0:02:39.556 ********** 2025-04-14 00:56:49.744356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744383 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.744397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744439 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.744453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-04-14 00:56:49.744485 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.744498 | orchestrator | 2025-04-14 00:56:49.744510 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-04-14 00:56:49.744523 | orchestrator | Monday 14 April 2025 00:51:39 +0000 (0:00:06.351) 0:02:45.907 ********** 2025-04-14 00:56:49.744535 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.744547 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.744560 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.744572 | orchestrator | 2025-04-14 00:56:49.744585 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-04-14 00:56:49.744597 | orchestrator | Monday 14 April 2025 00:51:40 +0000 (0:00:01.488) 0:02:47.396 ********** 2025-04-14 00:56:49.744610 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.744622 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.744635 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.744647 | orchestrator | 2025-04-14 00:56:49.744659 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-04-14 00:56:49.744671 | orchestrator | Monday 14 April 2025 00:51:42 +0000 (0:00:02.367) 0:02:49.763 ********** 2025-04-14 
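The glance entries above differ from the designate ones in two ways: the haproxy mapping carries frontend_http_extra/backend_http_extra timeouts of 6h (to accommodate long-running image uploads and downloads), and it supplies an explicit custom_member_list instead of letting the role derive backend members from the inventory group. The glance-tls-proxy items are skipped on every node because that service is marked enabled: 'no' here. A rough Python sketch of how the logged member list becomes a backend section; the server lines and the 6h timeout are taken verbatim from the log, while the enclosing HAProxy syntax is only an approximation of the rendered template output:

    # Approximate rendering of the glance_api backend from the logged item above;
    # indicative of the generated HAProxy config, not the exact kolla template output.
    custom_member_list = [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    ]
    backend_http_extra = ["timeout server 6h"]

    lines = ["backend glance_api_back", "    mode http"]
    lines += [f"    {extra}" for extra in backend_http_extra]
    lines += [f"    {member}" for member in custom_member_list]
    print("\n".join(lines))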
00:56:49.744684 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.744696 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.744708 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.744721 | orchestrator | 2025-04-14 00:56:49.744733 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-04-14 00:56:49.744745 | orchestrator | Monday 14 April 2025 00:51:43 +0000 (0:00:00.500) 0:02:50.264 ********** 2025-04-14 00:56:49.744758 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.744770 | orchestrator | 2025-04-14 00:56:49.744782 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-04-14 00:56:49.744795 | orchestrator | Monday 14 April 2025 00:51:44 +0000 (0:00:01.218) 0:02:51.482 ********** 2025-04-14 00:56:49.744806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 00:56:49.744818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 00:56:49.744844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 00:56:49.744860 | orchestrator | 2025-04-14 00:56:49.744871 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-04-14 00:56:49.744881 | orchestrator | Monday 14 April 2025 00:51:48 +0000 (0:00:04.038) 0:02:55.521 ********** 2025-04-14 00:56:49.744892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 00:56:49.744903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 00:56:49.744914 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.744924 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.744934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 00:56:49.744945 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.744955 | orchestrator | 2025-04-14 00:56:49.744965 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-04-14 00:56:49.744976 | orchestrator | Monday 14 April 2025 00:51:49 +0000 (0:00:00.392) 0:02:55.914 ********** 2025-04-14 00:56:49.744986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.744996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.745007 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.745017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.745027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.745043 | orchestrator | skipping: 
[testbed-node-1] 2025-04-14 00:56:49.745054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.745076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-04-14 00:56:49.745088 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.745098 | orchestrator | 2025-04-14 00:56:49.745109 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-04-14 00:56:49.745119 | orchestrator | Monday 14 April 2025 00:51:50 +0000 (0:00:00.955) 0:02:56.870 ********** 2025-04-14 00:56:49.745129 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.745140 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.745150 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.745160 | orchestrator | 2025-04-14 00:56:49.745170 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-04-14 00:56:49.745180 | orchestrator | Monday 14 April 2025 00:51:51 +0000 (0:00:01.126) 0:02:57.996 ********** 2025-04-14 00:56:49.745190 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.745200 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.745210 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.745220 | orchestrator | 2025-04-14 00:56:49.745230 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-04-14 00:56:49.745256 | orchestrator | Monday 14 April 2025 00:51:53 +0000 (0:00:02.146) 0:03:00.142 ********** 2025-04-14 00:56:49.745266 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.745276 | orchestrator | 2025-04-14 00:56:49.745287 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-04-14 00:56:49.745297 | orchestrator | Monday 14 April 2025 00:51:54 +0000 (0:00:01.203) 0:03:01.346 ********** 2025-04-14 00:56:49.745316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.745441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745452 | orchestrator | 2025-04-14 00:56:49.745475 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-04-14 00:56:49.745487 | orchestrator | Monday 14 April 2025 00:52:01 +0000 (0:00:07.160) 0:03:08.507 ********** 2025-04-14 00:56:49.745497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': 
'8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745543 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.745554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745598 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.745609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.745643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.745653 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.745664 | orchestrator | 2025-04-14 00:56:49.745674 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-04-14 00:56:49.745684 | 
orchestrator | Monday 14 April 2025 00:52:02 +0000 (0:00:01.116) 0:03:09.624 ********** 2025-04-14 00:56:49.745694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745751 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.745762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745808 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.745818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-04-14 00:56:49.745864 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.745878 | orchestrator | 2025-04-14 00:56:49.745889 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-04-14 00:56:49.745899 | orchestrator | Monday 14 April 
2025 00:52:04 +0000 (0:00:01.262) 0:03:10.887 ********** 2025-04-14 00:56:49.745909 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.745919 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.745929 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.745939 | orchestrator | 2025-04-14 00:56:49.745949 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-04-14 00:56:49.745959 | orchestrator | Monday 14 April 2025 00:52:05 +0000 (0:00:01.382) 0:03:12.269 ********** 2025-04-14 00:56:49.745969 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.745979 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.745989 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.746091 | orchestrator | 2025-04-14 00:56:49.746109 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-04-14 00:56:49.746119 | orchestrator | Monday 14 April 2025 00:52:07 +0000 (0:00:02.246) 0:03:14.515 ********** 2025-04-14 00:56:49.746130 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.746140 | orchestrator | 2025-04-14 00:56:49.746150 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-04-14 00:56:49.746160 | orchestrator | Monday 14 April 2025 00:52:08 +0000 (0:00:01.102) 0:03:15.617 ********** 2025-04-14 00:56:49.746187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 00:56:49.746206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 00:56:49.746232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 00:56:49.746262 | orchestrator | 2025-04-14 00:56:49.746273 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-04-14 00:56:49.746284 | orchestrator | Monday 14 April 2025 00:52:13 +0000 (0:00:04.542) 0:03:20.159 ********** 2025-04-14 00:56:49.746295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 00:56:49.746306 | 
orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.746331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 00:56:49.746348 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.746360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 00:56:49.746371 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.746381 | orchestrator | 2025-04-14 00:56:49.746403 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-04-14 00:56:49.746414 | orchestrator | Monday 14 April 2025 00:52:14 +0000 (0:00:00.855) 0:03:21.015 ********** 2025-04-14 00:56:49.746425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-14 00:56:49.746486 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.746500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746511 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-14 00:56:49.746552 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.746563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-04-14 00:56:49.746607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-04-14 00:56:49.746623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-04-14 00:56:49.746633 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.746644 | orchestrator | 2025-04-14 00:56:49.746654 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-04-14 00:56:49.746664 | orchestrator | Monday 14 April 2025 00:52:15 +0000 (0:00:01.457) 0:03:22.472 ********** 2025-04-14 00:56:49.746674 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.746684 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.746694 | orchestrator | changed: [testbed-node-2] 2025-04-14 
00:56:49.746704 | orchestrator | 2025-04-14 00:56:49.746714 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-04-14 00:56:49.746724 | orchestrator | Monday 14 April 2025 00:52:17 +0000 (0:00:01.400) 0:03:23.873 ********** 2025-04-14 00:56:49.746735 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.746745 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.746755 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.746765 | orchestrator | 2025-04-14 00:56:49.746775 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-04-14 00:56:49.746785 | orchestrator | Monday 14 April 2025 00:52:19 +0000 (0:00:02.818) 0:03:26.691 ********** 2025-04-14 00:56:49.746795 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.746805 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.746815 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.746825 | orchestrator | 2025-04-14 00:56:49.746835 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-04-14 00:56:49.746846 | orchestrator | Monday 14 April 2025 00:52:20 +0000 (0:00:00.534) 0:03:27.226 ********** 2025-04-14 00:56:49.746856 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.746866 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.746876 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.746886 | orchestrator | 2025-04-14 00:56:49.746896 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-04-14 00:56:49.746906 | orchestrator | Monday 14 April 2025 00:52:20 +0000 (0:00:00.332) 0:03:27.559 ********** 2025-04-14 00:56:49.746916 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.746927 | orchestrator | 2025-04-14 00:56:49.746937 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-04-14 00:56:49.746947 | orchestrator | Monday 14 April 2025 00:52:22 +0000 (0:00:01.386) 0:03:28.946 ********** 2025-04-14 00:56:49.746957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 00:56:49.746983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 00:56:49.747000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 00:56:49.747077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747100 | orchestrator | 2025-04-14 00:56:49.747111 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-04-14 00:56:49.747121 | orchestrator | Monday 14 April 2025 00:52:26 +0000 (0:00:04.386) 0:03:33.332 ********** 2025-04-14 00:56:49.747132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 00:56:49.747143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747169 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.747194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 00:56:49.747206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747227 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.747277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 00:56:49.747290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 00:56:49.747306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 00:56:49.747317 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.747327 | orchestrator | 2025-04-14 00:56:49.747337 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-04-14 00:56:49.747348 | orchestrator | Monday 14 April 2025 00:52:27 +0000 (0:00:01.021) 0:03:34.354 ********** 2025-04-14 00:56:49.747371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747396 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.747406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747428 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.747438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-04-14 00:56:49.747459 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.747469 | orchestrator | 2025-04-14 00:56:49.747479 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-04-14 00:56:49.747489 | orchestrator | Monday 14 April 2025 00:52:28 +0000 (0:00:00.986) 0:03:35.340 ********** 2025-04-14 00:56:49.747499 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.747509 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.747519 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.747529 | orchestrator | 2025-04-14 00:56:49.747539 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-04-14 00:56:49.747554 | orchestrator | Monday 14 April 2025 00:52:30 +0000 (0:00:01.530) 0:03:36.871 ********** 2025-04-14 00:56:49.747565 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.747575 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.747585 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.747595 | orchestrator | 2025-04-14 00:56:49.747605 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-04-14 00:56:49.747615 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:02.398) 0:03:39.269 ********** 2025-04-14 00:56:49.747623 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.747632 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.747640 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.747649 | orchestrator | 2025-04-14 00:56:49.747661 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-04-14 00:56:49.747670 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:00.361) 0:03:39.630 ********** 2025-04-14 00:56:49.747678 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.747687 | orchestrator | 2025-04-14 
00:56:49.747695 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-04-14 00:56:49.747703 | orchestrator | Monday 14 April 2025 00:52:34 +0000 (0:00:01.389) 0:03:41.019 ********** 2025-04-14 00:56:49.747712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 00:56:49.747734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 00:56:49.747754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 00:56:49.747780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747788 | orchestrator | 2025-04-14 00:56:49.747797 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-04-14 00:56:49.747806 | orchestrator | Monday 14 April 2025 00:52:38 +0000 (0:00:04.518) 0:03:45.538 ********** 2025-04-14 00:56:49.747826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 00:56:49.747837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747851 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.747860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 00:56:49.747869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747877 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.747898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 00:56:49.747908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.747917 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.747926 | orchestrator | 2025-04-14 00:56:49.747935 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-04-14 00:56:49.747948 | orchestrator | Monday 14 April 2025 00:52:40 +0000 (0:00:01.395) 0:03:46.934 ********** 2025-04-14 00:56:49.747956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.747965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.747977 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.747986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.747995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.748003 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.748012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.748021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-04-14 00:56:49.748029 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.748038 | orchestrator | 2025-04-14 00:56:49.748047 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-04-14 00:56:49.748055 | orchestrator | Monday 14 April 2025 00:52:41 +0000 (0:00:01.389) 0:03:48.324 ********** 2025-04-14 00:56:49.748064 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.748072 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.748080 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.748089 | orchestrator | 2025-04-14 00:56:49.748097 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-04-14 00:56:49.748106 | orchestrator | Monday 14 April 2025 00:52:42 +0000 (0:00:01.405) 0:03:49.729 ********** 2025-04-14 00:56:49.748114 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.748123 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.748131 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.748140 | orchestrator | 2025-04-14 00:56:49.748149 | orchestrator | TASK [include_role : manila] 
*************************************************** 2025-04-14 00:56:49.748157 | orchestrator | Monday 14 April 2025 00:52:45 +0000 (0:00:02.327) 0:03:52.057 ********** 2025-04-14 00:56:49.748166 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.748174 | orchestrator | 2025-04-14 00:56:49.748183 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-04-14 00:56:49.748191 | orchestrator | Monday 14 April 2025 00:52:46 +0000 (0:00:01.335) 0:03:53.393 ********** 2025-04-14 00:56:49.748212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-14 00:56:49.748227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-14 00:56:49.748274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-04-14 00:56:49.748322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748349 | orchestrator | 2025-04-14 00:56:49.748358 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-04-14 00:56:49.748367 | orchestrator | Monday 14 April 2025 00:52:52 +0000 (0:00:05.655) 0:03:59.048 ********** 2025-04-14 00:56:49.748379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-14 00:56:49.748394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748422 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.748445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-14 00:56:49.748455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748501 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.748512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-04-14 00:56:49.748521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.748562 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.748572 | orchestrator | 2025-04-14 00:56:49.748581 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-04-14 00:56:49.748589 | orchestrator | Monday 14 April 2025 00:52:53 +0000 (0:00:00.982) 0:04:00.031 ********** 2025-04-14 00:56:49.748598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748628 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.748637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748655 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.748663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-04-14 00:56:49.748681 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.748689 | orchestrator | 2025-04-14 00:56:49.748698 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-04-14 00:56:49.748707 | orchestrator | Monday 14 April 2025 00:52:54 +0000 (0:00:01.233) 0:04:01.264 ********** 2025-04-14 00:56:49.748715 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.748724 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.748733 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.748741 | orchestrator | 2025-04-14 00:56:49.748750 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-04-14 00:56:49.748758 | orchestrator | Monday 14 April 2025 00:52:55 +0000 (0:00:01.367) 0:04:02.632 ********** 2025-04-14 00:56:49.748767 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.748775 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.748784 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.748792 | orchestrator | 2025-04-14 00:56:49.748801 | orchestrator | TASK [include_role : 
mariadb] ************************************************** 2025-04-14 00:56:49.748810 | orchestrator | Monday 14 April 2025 00:52:58 +0000 (0:00:02.456) 0:04:05.088 ********** 2025-04-14 00:56:49.748818 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.748827 | orchestrator | 2025-04-14 00:56:49.748835 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-04-14 00:56:49.748844 | orchestrator | Monday 14 April 2025 00:52:59 +0000 (0:00:01.501) 0:04:06.590 ********** 2025-04-14 00:56:49.748852 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:56:49.748861 | orchestrator | 2025-04-14 00:56:49.748869 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-04-14 00:56:49.748878 | orchestrator | Monday 14 April 2025 00:53:03 +0000 (0:00:03.312) 0:04:09.902 ********** 2025-04-14 00:56:49.748887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.748922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.748932 | orchestrator | skipping: 
[testbed-node-0] 2025-04-14 00:56:49.748941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.748955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.748964 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.748977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.748993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.749003 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749011 | orchestrator | 2025-04-14 00:56:49.749020 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-04-14 00:56:49.749029 | orchestrator | Monday 14 April 2025 00:53:06 +0000 (0:00:03.447) 0:04:13.349 ********** 2025-04-14 00:56:49.749038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.749072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.749083 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.749106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.749115 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-04-14 00:56:49.749153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-04-14 00:56:49.749162 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749171 | orchestrator | 2025-04-14 00:56:49.749179 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-04-14 00:56:49.749188 | orchestrator | Monday 14 April 2025 00:53:10 +0000 (0:00:03.634) 0:04:16.984 ********** 2025-04-14 00:56:49.749197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749219 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749284 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-04-14 00:56:49.749332 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749340 | orchestrator | 2025-04-14 00:56:49.749348 | orchestrator 
| TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-04-14 00:56:49.749356 | orchestrator | Monday 14 April 2025 00:53:13 +0000 (0:00:03.568) 0:04:20.553 ********** 2025-04-14 00:56:49.749364 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.749372 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.749380 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.749387 | orchestrator | 2025-04-14 00:56:49.749396 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-04-14 00:56:49.749409 | orchestrator | Monday 14 April 2025 00:53:15 +0000 (0:00:02.236) 0:04:22.789 ********** 2025-04-14 00:56:49.749417 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749425 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749432 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749440 | orchestrator | 2025-04-14 00:56:49.749448 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-04-14 00:56:49.749456 | orchestrator | Monday 14 April 2025 00:53:17 +0000 (0:00:01.944) 0:04:24.734 ********** 2025-04-14 00:56:49.749464 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749472 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749479 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749487 | orchestrator | 2025-04-14 00:56:49.749495 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-04-14 00:56:49.749503 | orchestrator | Monday 14 April 2025 00:53:18 +0000 (0:00:00.312) 0:04:25.047 ********** 2025-04-14 00:56:49.749511 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.749518 | orchestrator | 2025-04-14 00:56:49.749526 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-04-14 00:56:49.749534 | orchestrator | Monday 14 April 2025 00:53:19 +0000 (0:00:01.503) 0:04:26.551 ********** 2025-04-14 00:56:49.749542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-14 00:56:49.749551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-14 00:56:49.749572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-04-14 00:56:49.749581 | orchestrator | 2025-04-14 00:56:49.749589 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-04-14 00:56:49.749597 | orchestrator | Monday 14 April 2025 00:53:21 +0000 (0:00:01.712) 0:04:28.264 ********** 2025-04-14 00:56:49.749610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-14 00:56:49.749618 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-14 00:56:49.749635 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': 
{'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-04-14 00:56:49.749660 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749668 | orchestrator | 2025-04-14 00:56:49.749676 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-04-14 00:56:49.749684 | orchestrator | Monday 14 April 2025 00:53:21 +0000 (0:00:00.573) 0:04:28.837 ********** 2025-04-14 00:56:49.749692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-14 00:56:49.749700 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-14 00:56:49.749717 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-04-14 00:56:49.749733 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749741 | orchestrator | 2025-04-14 00:56:49.749759 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-04-14 00:56:49.749773 | orchestrator | Monday 14 April 2025 00:53:22 +0000 (0:00:00.782) 0:04:29.620 ********** 2025-04-14 00:56:49.749782 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749790 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749797 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749805 | orchestrator | 2025-04-14 00:56:49.749814 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-04-14 00:56:49.749821 | orchestrator | Monday 14 April 2025 00:53:23 +0000 (0:00:00.702) 0:04:30.323 ********** 2025-04-14 00:56:49.749829 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749837 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749845 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749853 | orchestrator | 2025-04-14 00:56:49.749861 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-04-14 00:56:49.749869 | orchestrator | Monday 14 April 2025 00:53:25 +0000 (0:00:01.663) 0:04:31.987 ********** 2025-04-14 00:56:49.749877 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.749885 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.749893 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.749901 | orchestrator | 2025-04-14 00:56:49.749909 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-04-14 00:56:49.749916 | orchestrator | Monday 14 April 2025 00:53:25 +0000 (0:00:00.314) 0:04:32.301 ********** 2025-04-14 
00:56:49.749924 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.749932 | orchestrator | 2025-04-14 00:56:49.749940 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-04-14 00:56:49.749948 | orchestrator | Monday 14 April 2025 00:53:27 +0000 (0:00:01.609) 0:04:33.910 ********** 2025-04-14 00:56:49.749956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 00:56:49.749965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.749974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.749997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.750041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.750131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.750169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 00:56:49.750207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.750295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.750377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.750413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 00:56:49.750451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-04-14 00:56:49.750489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.750508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 
'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.750592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.750628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750645 | orchestrator | 2025-04-14 00:56:49.750653 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-04-14 00:56:49.750661 | orchestrator | Monday 14 April 2025 00:53:32 +0000 (0:00:05.181) 0:04:39.092 ********** 2025-04-14 00:56:49.750680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 00:56:49.750689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 00:56:49.750704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.750790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.750819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750871 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.750935 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.750944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.750971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.750981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.750990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.751003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.751017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.751046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.751063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751077 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.751085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.751094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751102 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.751110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 00:56:49.751130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 00:56:49.751174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751184 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.751203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.751213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.751252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.751269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 00:56:49.751278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 00:56:49.751319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 00:56:49.751327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.751336 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.751344 | orchestrator | 2025-04-14 00:56:49.751352 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-04-14 00:56:49.751363 | orchestrator | Monday 14 April 2025 00:53:34 +0000 (0:00:01.984) 0:04:41.076 ********** 2025-04-14 00:56:49.751372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751388 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.751398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751414 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.751422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-04-14 00:56:49.751438 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.751449 | orchestrator | 2025-04-14 00:56:49.751457 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-04-14 00:56:49.751465 | orchestrator | Monday 14 April 2025 00:53:36 +0000 (0:00:02.075) 0:04:43.152 ********** 2025-04-14 00:56:49.751477 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.751485 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.751504 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.751513 | orchestrator | 2025-04-14 00:56:49.751522 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-04-14 00:56:49.751530 | orchestrator | Monday 14 April 2025 00:53:37 +0000 (0:00:01.413) 0:04:44.565 ********** 2025-04-14 00:56:49.751538 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.751546 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.751554 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.751562 | orchestrator | 2025-04-14 00:56:49.751570 | orchestrator | TASK [include_role : placement] ************************************************ 2025-04-14 00:56:49.751578 | orchestrator | Monday 14 April 
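
The long run of "skipping" results above reflects the selection that the haproxy-config role applies to each project's service map: an item only produces HAProxy configuration when the service itself is enabled and carries at least one enabled haproxy entry. In this OVN-based testbed that leaves only neutron-server; the ovn-metadata-agent is enabled but has nothing to load-balance, and the remaining agents are disabled. The Python sketch below reproduces that decision from trimmed copies of the dictionaries printed above; the helper and the simplified condition are illustrations, not kolla-ansible source.

# Sketch of the per-item outcome visible above (trimmed dicts, simplified logic).
neutron_services = {
    "neutron-server": {
        "enabled": True,
        "haproxy": {"neutron_server": {"enabled": True, "port": "9696"},
                    "neutron_server_external": {"enabled": True, "port": "9696"}},
    },
    "neutron-ovn-metadata-agent": {"enabled": True},   # enabled, but nothing to balance
    "neutron-dhcp-agent": {"enabled": False},          # disabled under OVN
    "neutron-tls-proxy": {"enabled": "no",             # note the string-style boolean
                          "haproxy": {"neutron_tls_proxy": {"enabled": False}}},
}

def truthy(value):
    # The items mix real booleans with Ansible-style "yes"/"no" strings.
    return value if isinstance(value, bool) else str(value).lower() in {"yes", "true", "1"}

for name, svc in neutron_services.items():
    balanced = {k: v for k, v in svc.get("haproxy", {}).items() if truthy(v.get("enabled"))}
    if truthy(svc.get("enabled")) and balanced:
        print(f"changed:  {name} -> {sorted(balanced)}")
    else:
        print(f"skipping: {name}")
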
2025 00:53:40 +0000 (0:00:02.445) 0:04:47.011 ********** 2025-04-14 00:56:49.751586 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.751594 | orchestrator | 2025-04-14 00:56:49.751602 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-04-14 00:56:49.751610 | orchestrator | Monday 14 April 2025 00:53:41 +0000 (0:00:01.702) 0:04:48.713 ********** 2025-04-14 00:56:49.751618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.751627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.751635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.751648 | orchestrator | 2025-04-14 00:56:49.751656 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-04-14 00:56:49.751664 | orchestrator | Monday 14 April 2025 
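
Each container definition in the items above also carries a healthcheck block: an interval, timeout and start_period (the values appear to be seconds), a retry count, and a CMD-SHELL test using one of kolla's wrappers (healthcheck_curl against the local API endpoint, or healthcheck_port checking a service's connection to a port such as RabbitMQ's 5672). These fields line up with Docker's native health-check options, which is presumably how the container module consumes them. The sketch below only illustrates that mapping with the placement-api values printed above; the nanosecond conversion and the Docker SDK call are assumptions, not kolla's actual code path.

# Illustrative mapping of a kolla 'healthcheck' block onto Docker's health-check
# options, using the placement-api values from the log above.
healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5", "timeout": "30",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
}

NANOS = 1_000_000_000  # the Docker API takes these durations in nanoseconds
docker_healthcheck = {
    "test": healthcheck["test"],
    "interval": int(healthcheck["interval"]) * NANOS,
    "timeout": int(healthcheck["timeout"]) * NANOS,
    "start_period": int(healthcheck["start_period"]) * NANOS,
    "retries": int(healthcheck["retries"]),
}

# Hypothetical usage with the Docker SDK (not how kolla-ansible starts containers):
# import docker
# docker.from_env().containers.run(
#     "registry.osism.tech/kolla/release/placement-api:11.0.0.20241206",
#     healthcheck=docker_healthcheck, detach=True)
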
00:53:45 +0000 (0:00:03.995) 0:04:52.709 ********** 2025-04-14 00:56:49.751689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.751699 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.751707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.751715 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.751724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.751732 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.751740 | orchestrator | 2025-04-14 00:56:49.751748 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-04-14 00:56:49.751756 | orchestrator | Monday 14 April 2025 00:53:46 +0000 (0:00:00.496) 0:04:53.205 ********** 2025-04-14 00:56:49.751764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751785 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.751793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751809 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.751818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-04-14 00:56:49.751834 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.751842 | orchestrator | 2025-04-14 00:56:49.751850 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-04-14 00:56:49.751869 | orchestrator | Monday 14 April 2025 00:53:47 +0000 (0:00:01.221) 0:04:54.427 ********** 2025-04-14 00:56:49.751877 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.751886 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.751894 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.751902 | orchestrator | 2025-04-14 00:56:49.751910 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-04-14 00:56:49.751918 | orchestrator | Monday 14 April 2025 00:53:48 +0000 (0:00:01.215) 0:04:55.642 ********** 2025-04-14 00:56:49.751926 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.751934 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.751942 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.751950 | orchestrator | 2025-04-14 00:56:49.751958 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-04-14 00:56:49.751966 | orchestrator | Monday 14 April 2025 00:53:51 +0000 (0:00:02.413) 0:04:58.056 ********** 2025-04-14 00:56:49.751974 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.751982 | orchestrator | 2025-04-14 00:56:49.751990 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-04-14 00:56:49.751998 | orchestrator | Monday 14 April 2025 00:53:52 +0000 (0:00:01.686) 0:04:59.742 ********** 2025-04-14 00:56:49.752012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.752021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-04-14 00:56:49.752066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.752102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752131 | orchestrator | 2025-04-14 00:56:49.752139 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-04-14 00:56:49.752147 | orchestrator | Monday 14 April 2025 00:53:58 +0000 (0:00:05.851) 0:05:05.594 ********** 2025-04-14 00:56:49.752156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.752170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752192 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.752200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.752220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752267 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.752283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.752297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.752314 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.752322 | orchestrator | 2025-04-14 00:56:49.752330 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-04-14 00:56:49.752338 | orchestrator | Monday 14 April 2025 00:53:59 +0000 (0:00:01.198) 0:05:06.792 ********** 2025-04-14 00:56:49.752346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752387 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.752394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 
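
The firewall items above list the HAProxy entries for nova as flat key/value pairs: entries with 'external': False expose the service port on the internal side, while the *_external variants reuse the same backend port behind the shared external FQDN api.testbed.osism.xyz; nova_metadata_external is switched off ('enabled': 'no'). The short sketch below only groups those printed entries by their external flag to make that split visible; it is not the firewall or template logic itself.

# Group the nova haproxy entries from the log above into internal vs. external
# listeners (illustration only; trimmed copies of the printed values).
entries = {
    "nova_api":               {"enabled": True, "external": False, "listen_port": "8774"},
    "nova_api_external":      {"enabled": True, "external": True,  "listen_port": "8774",
                               "external_fqdn": "api.testbed.osism.xyz"},
    "nova_metadata":          {"enabled": True, "external": False, "listen_port": "8775"},
    "nova_metadata_external": {"enabled": "no", "external": True,  "listen_port": "8775",
                               "external_fqdn": "api.testbed.osism.xyz"},
}

listeners = {"internal": [], "external": []}
for name, e in sorted(entries.items()):
    if e["enabled"] in (False, "no"):          # string-style booleans appear here too
        continue
    side = "external" if e["external"] else "internal"
    listeners[side].append((name, e["listen_port"], e.get("external_fqdn", "-")))

for side, items in listeners.items():
    print(side, items)
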
00:56:49.752420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752427 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.752434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-04-14 00:56:49.752463 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.752470 | orchestrator | 2025-04-14 00:56:49.752477 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-04-14 00:56:49.752484 | orchestrator | Monday 14 April 2025 00:54:01 +0000 (0:00:01.310) 0:05:08.103 ********** 2025-04-14 00:56:49.752491 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.752498 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.752505 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.752512 | orchestrator | 2025-04-14 00:56:49.752519 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-04-14 00:56:49.752525 | orchestrator | Monday 14 April 2025 00:54:02 +0000 (0:00:01.537) 0:05:09.640 ********** 2025-04-14 00:56:49.752533 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.752539 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.752546 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.752553 | orchestrator | 2025-04-14 00:56:49.752560 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-04-14 00:56:49.752567 | orchestrator | Monday 14 April 2025 00:54:05 +0000 (0:00:02.465) 0:05:12.106 ********** 2025-04-14 00:56:49.752574 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.752581 | orchestrator | 2025-04-14 00:56:49.752591 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-04-14 00:56:49.752598 | orchestrator | Monday 14 April 2025 00:54:07 +0000 (0:00:01.811) 0:05:13.917 ********** 2025-04-14 00:56:49.752605 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-04-14 00:56:49.752612 | orchestrator | 2025-04-14 00:56:49.752619 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-04-14 00:56:49.752626 | orchestrator | Monday 14 April 2025 00:54:08 +0000 (0:00:01.314) 
0:05:15.231 ********** 2025-04-14 00:56:49.752643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-14 00:56:49.752651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-14 00:56:49.752663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-04-14 00:56:49.752671 | orchestrator | 2025-04-14 00:56:49.752678 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-04-14 00:56:49.752685 | orchestrator | Monday 14 April 2025 00:54:14 +0000 (0:00:06.089) 0:05:21.321 ********** 2025-04-14 00:56:49.752698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.752706 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.752713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.752720 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.752727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.752734 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.752741 | orchestrator | 2025-04-14 00:56:49.752748 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-04-14 00:56:49.752755 | orchestrator | Monday 14 April 2025 00:54:16 +0000 (0:00:02.041) 0:05:23.362 ********** 2025-04-14 00:56:49.752762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752777 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.752784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752816 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.752824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-04-14 00:56:49.752840 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.752847 | orchestrator | 2025-04-14 00:56:49.752854 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-14 00:56:49.752861 | orchestrator | Monday 14 April 2025 00:54:18 +0000 (0:00:02.186) 0:05:25.549 ********** 2025-04-14 00:56:49.752868 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.752874 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.752881 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.752888 | orchestrator | 2025-04-14 00:56:49.752895 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-14 00:56:49.752902 | orchestrator | Monday 14 April 2025 00:54:21 +0000 (0:00:03.180) 0:05:28.729 ********** 2025-04-14 00:56:49.752909 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.752916 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.752923 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.752930 | orchestrator | 2025-04-14 00:56:49.752937 | orchestrator | TASK [nova-cell : Configure 
loadbalancer for nova-spicehtml5proxy] ************* 2025-04-14 00:56:49.752944 | orchestrator | Monday 14 April 2025 00:54:25 +0000 (0:00:03.638) 0:05:32.368 ********** 2025-04-14 00:56:49.752954 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-2, testbed-node-1 => (item=nova-spicehtml5proxy) 2025-04-14 00:56:49.752961 | orchestrator | 2025-04-14 00:56:49.752968 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-04-14 00:56:49.752975 | orchestrator | Monday 14 April 2025 00:54:26 +0000 (0:00:01.296) 0:05:33.664 ********** 2025-04-14 00:56:49.752982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.752989 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.752996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.753004 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.753022 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753029 | orchestrator | 2025-04-14 00:56:49.753036 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-04-14 00:56:49.753043 | orchestrator | Monday 14 April 2025 00:54:28 +0000 (0:00:01.634) 0:05:35.298 ********** 2025-04-14 00:56:49.753066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.753074 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753081 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.753089 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-04-14 00:56:49.753103 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753110 | orchestrator | 2025-04-14 00:56:49.753117 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-04-14 00:56:49.753124 | orchestrator | Monday 14 April 2025 00:54:30 +0000 (0:00:01.799) 0:05:37.098 ********** 2025-04-14 00:56:49.753131 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753138 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753145 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753157 | orchestrator | 2025-04-14 00:56:49.753164 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-14 00:56:49.753171 | orchestrator | Monday 14 April 2025 00:54:32 +0000 (0:00:02.209) 0:05:39.308 ********** 2025-04-14 00:56:49.753178 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.753185 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.753192 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.753199 | orchestrator | 2025-04-14 00:56:49.753206 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-14 00:56:49.753213 | orchestrator | Monday 14 April 2025 00:54:35 +0000 (0:00:02.861) 0:05:42.169 ********** 2025-04-14 00:56:49.753220 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.753227 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.753233 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.753253 | orchestrator | 2025-04-14 00:56:49.753260 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-04-14 00:56:49.753267 | orchestrator | Monday 14 April 2025 00:54:38 +0000 (0:00:03.436) 0:05:45.606 ********** 2025-04-14 00:56:49.753278 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-04-14 00:56:49.753285 | orchestrator | 2025-04-14 00:56:49.753292 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-04-14 00:56:49.753299 | orchestrator | Monday 14 April 2025 00:54:40 +0000 (0:00:01.746) 0:05:47.352 ********** 2025-04-14 00:56:49.753306 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753314 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753328 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753353 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753360 | orchestrator | 2025-04-14 00:56:49.753368 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-04-14 00:56:49.753375 | orchestrator | Monday 14 April 2025 00:54:42 +0000 (0:00:02.095) 0:05:49.448 ********** 2025-04-14 00:56:49.753382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753389 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753410 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': 
{'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-04-14 00:56:49.753428 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753435 | orchestrator | 2025-04-14 00:56:49.753442 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-04-14 00:56:49.753449 | orchestrator | Monday 14 April 2025 00:54:44 +0000 (0:00:01.482) 0:05:50.931 ********** 2025-04-14 00:56:49.753456 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753463 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753470 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753477 | orchestrator | 2025-04-14 00:56:49.753484 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-04-14 00:56:49.753491 | orchestrator | Monday 14 April 2025 00:54:46 +0000 (0:00:02.126) 0:05:53.057 ********** 2025-04-14 00:56:49.753498 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.753505 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.753511 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.753518 | orchestrator | 2025-04-14 00:56:49.753525 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-04-14 00:56:49.753535 | orchestrator | Monday 14 April 2025 00:54:49 +0000 (0:00:02.942) 0:05:55.999 ********** 2025-04-14 00:56:49.753542 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.753549 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.753556 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.753563 | orchestrator | 2025-04-14 00:56:49.753570 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-04-14 00:56:49.753577 | orchestrator | Monday 14 April 2025 00:54:52 +0000 (0:00:03.562) 0:05:59.562 ********** 2025-04-14 00:56:49.753584 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.753591 | orchestrator | 2025-04-14 00:56:49.753598 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-04-14 00:56:49.753605 | orchestrator | Monday 14 April 2025 00:54:54 +0000 (0:00:01.751) 0:06:01.314 ********** 2025-04-14 00:56:49.753622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.753630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.753677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-04-14 
00:56:49.753735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753785 | orchestrator | 2025-04-14 00:56:49.753793 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-04-14 00:56:49.753800 | orchestrator | Monday 14 April 2025 00:54:59 +0000 (0:00:04.858) 0:06:06.172 ********** 2025-04-14 00:56:49.753807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.753814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753858 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.753871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.753879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753908 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.753932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.753944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-04-14 00:56:49.753952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-04-14 00:56:49.753967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-04-14 00:56:49.753974 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.753981 | orchestrator | 2025-04-14 00:56:49.753988 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-04-14 00:56:49.753995 | orchestrator | Monday 14 April 2025 00:55:00 +0000 (0:00:01.123) 0:06:07.295 ********** 2025-04-14 00:56:49.754002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754033 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.754042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754064 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.754081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-04-14 00:56:49.754096 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.754103 | orchestrator | 2025-04-14 00:56:49.754110 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-04-14 00:56:49.754117 | orchestrator | Monday 14 April 2025 00:55:01 +0000 (0:00:01.485) 0:06:08.781 ********** 2025-04-14 00:56:49.754124 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.754131 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.754138 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.754145 | orchestrator | 2025-04-14 00:56:49.754152 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-04-14 00:56:49.754159 | orchestrator | Monday 14 April 2025 00:55:03 +0000 (0:00:01.529) 0:06:10.310 ********** 2025-04-14 00:56:49.754166 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.754173 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.754180 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.754187 | orchestrator | 2025-04-14 00:56:49.754194 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-04-14 00:56:49.754201 | orchestrator | Monday 14 April 2025 00:55:06 +0000 (0:00:02.637) 0:06:12.947 ********** 2025-04-14 00:56:49.754208 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.754215 | orchestrator | 2025-04-14 00:56:49.754221 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-04-14 00:56:49.754229 | orchestrator | Monday 14 April 2025 00:55:07 +0000 (0:00:01.788) 0:06:14.736 ********** 2025-04-14 00:56:49.754249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:56:49.754264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:56:49.754276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:56:49.754296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:56:49.754304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:56:49.754317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:56:49.754330 | orchestrator | 2025-04-14 00:56:49.754337 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-04-14 00:56:49.754344 | orchestrator | Monday 14 April 2025 00:55:14 +0000 (0:00:06.996) 0:06:21.732 ********** 2025-04-14 00:56:49.754361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:56:49.754369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:56:49.754383 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.754390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:56:49.754398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:56:49.754409 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.754417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:56:49.754440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:56:49.754449 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.754456 | orchestrator | 2025-04-14 00:56:49.754463 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-04-14 00:56:49.754470 | orchestrator | Monday 14 April 2025 00:55:15 +0000 (0:00:00.921) 0:06:22.653 ********** 2025-04-14 00:56:49.754477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-14 00:56:49.754484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754499 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.754506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-14 00:56:49.754513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754532 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.754542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-04-14 00:56:49.754549 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-04-14 00:56:49.754563 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.754570 | orchestrator | 2025-04-14 00:56:49.754577 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-04-14 00:56:49.754584 | orchestrator | Monday 14 April 2025 00:55:17 +0000 (0:00:01.468) 0:06:24.122 ********** 2025-04-14 00:56:49.754591 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.754598 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.754605 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.754612 | orchestrator | 2025-04-14 00:56:49.754619 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-04-14 00:56:49.754626 | orchestrator | Monday 14 April 2025 00:55:17 +0000 (0:00:00.457) 0:06:24.580 ********** 2025-04-14 00:56:49.754632 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.754639 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.754646 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.754705 | orchestrator | 2025-04-14 00:56:49.754713 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-04-14 00:56:49.754720 | orchestrator | Monday 14 April 2025 00:55:19 +0000 (0:00:01.766) 0:06:26.346 ********** 2025-04-14 00:56:49.754737 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.754745 | orchestrator | 2025-04-14 00:56:49.754752 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-04-14 00:56:49.754759 | orchestrator | Monday 14 April 2025 00:55:21 +0000 (0:00:01.947) 0:06:28.294 ********** 2025-04-14 00:56:49.754766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 00:56:49.754774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.754785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.754808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 00:56:49.754828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.754836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.754863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 00:56:49.754870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.754877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.754910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 00:56:49.754922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.754929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': 
{'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.754967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.754975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 00:56:49.754986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.754994 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 00:56:49.755039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.755047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755080 | orchestrator | 2025-04-14 00:56:49.755087 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-04-14 00:56:49.755098 | orchestrator | Monday 14 April 2025 00:55:26 +0000 (0:00:05.150) 0:06:33.444 ********** 2025-04-14 00:56:49.755106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 00:56:49.755113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.755120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 00:56:49.755157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.755164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755193 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 00:56:49.755215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.755222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 00:56:49.755262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 00:56:49.755276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 00:56:49.755284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.755291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}})  2025-04-14 00:56:49.755350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 00:56:49.755364 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 00:56:49.755400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 00:56:49.755407 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755415 | orchestrator | 2025-04-14 00:56:49.755422 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-04-14 00:56:49.755429 | orchestrator | Monday 14 April 2025 00:55:28 +0000 (0:00:01.637) 0:06:35.081 ********** 2025-04-14 00:56:49.755436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755465 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755505 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-04-14 00:56:49.755529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-04-14 00:56:49.755547 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755557 | orchestrator | 2025-04-14 00:56:49.755564 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-04-14 00:56:49.755571 | orchestrator | Monday 14 April 2025 00:55:29 +0000 (0:00:01.675) 0:06:36.757 ********** 2025-04-14 00:56:49.755578 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755585 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755592 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755599 | orchestrator | 2025-04-14 00:56:49.755606 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-04-14 00:56:49.755613 | orchestrator | Monday 14 April 2025 00:55:30 +0000 (0:00:00.856) 0:06:37.613 ********** 2025-04-14 00:56:49.755620 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755627 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755634 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755641 | orchestrator | 2025-04-14 00:56:49.755648 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-04-14 00:56:49.755655 | orchestrator | Monday 14 April 2025 00:55:32 +0000 (0:00:02.141) 0:06:39.754 ********** 2025-04-14 00:56:49.755661 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.755668 | orchestrator | 2025-04-14 00:56:49.755675 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-04-14 00:56:49.755682 | orchestrator | Monday 14 April 2025 00:55:34 +0000 (0:00:01.917) 0:06:41.671 ********** 2025-04-14 00:56:49.755689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2025-04-14 00:56:49.755697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:56:49.755712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-04-14 00:56:49.755720 | orchestrator | 2025-04-14 00:56:49.755727 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-04-14 00:56:49.755734 | orchestrator | Monday 14 April 2025 00:55:37 +0000 (0:00:03.034) 0:06:44.706 ********** 2025-04-14 00:56:49.755741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-14 00:56:49.755748 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755756 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-14 00:56:49.755767 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-04-14 00:56:49.755781 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755788 | orchestrator | 2025-04-14 00:56:49.755796 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-04-14 00:56:49.755803 | orchestrator | Monday 14 April 2025 00:55:38 +0000 (0:00:00.749) 0:06:45.456 ********** 2025-04-14 00:56:49.755810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-14 00:56:49.755817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-14 00:56:49.755824 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755831 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-04-14 00:56:49.755848 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755855 | orchestrator | 2025-04-14 00:56:49.755862 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-04-14 00:56:49.755869 | orchestrator | Monday 14 April 2025 00:55:39 +0000 (0:00:01.221) 0:06:46.678 ********** 2025-04-14 00:56:49.755876 | orchestrator | 
skipping: [testbed-node-0] 2025-04-14 00:56:49.755883 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755890 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755897 | orchestrator | 2025-04-14 00:56:49.755904 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-04-14 00:56:49.755911 | orchestrator | Monday 14 April 2025 00:55:40 +0000 (0:00:00.496) 0:06:47.174 ********** 2025-04-14 00:56:49.755918 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.755925 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.755932 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.755939 | orchestrator | 2025-04-14 00:56:49.755946 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-04-14 00:56:49.755953 | orchestrator | Monday 14 April 2025 00:55:42 +0000 (0:00:01.869) 0:06:49.044 ********** 2025-04-14 00:56:49.755959 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:56:49.755966 | orchestrator | 2025-04-14 00:56:49.755973 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-04-14 00:56:49.755980 | orchestrator | Monday 14 April 2025 00:55:44 +0000 (0:00:02.063) 0:06:51.108 ********** 2025-04-14 00:56:49.755988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.755999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.756006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.756017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.756025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.756036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-04-14 00:56:49.756044 | orchestrator | 2025-04-14 00:56:49.756051 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-04-14 00:56:49.756057 | orchestrator | Monday 14 April 2025 00:55:52 +0000 (0:00:07.884) 0:06:58.992 ********** 2025-04-14 00:56:49.756065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756083 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756117 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-04-14 00:56:49.756148 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756155 | orchestrator | 2025-04-14 00:56:49.756162 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-04-14 00:56:49.756169 | orchestrator | Monday 14 April 2025 00:55:53 +0000 (0:00:01.194) 0:07:00.187 ********** 2025-04-14 00:56:49.756176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756233 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756278 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-04-14 00:56:49.756314 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756321 | orchestrator | 2025-04-14 00:56:49.756328 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-04-14 00:56:49.756335 | orchestrator | Monday 14 April 2025 00:55:54 +0000 (0:00:01.463) 0:07:01.650 ********** 2025-04-14 00:56:49.756342 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.756349 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.756356 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.756363 | orchestrator | 2025-04-14 
00:56:49.756370 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-04-14 00:56:49.756380 | orchestrator | Monday 14 April 2025 00:55:56 +0000 (0:00:01.472) 0:07:03.122 ********** 2025-04-14 00:56:49.756387 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.756394 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.756401 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.756408 | orchestrator | 2025-04-14 00:56:49.756415 | orchestrator | TASK [include_role : swift] **************************************************** 2025-04-14 00:56:49.756426 | orchestrator | Monday 14 April 2025 00:55:58 +0000 (0:00:02.652) 0:07:05.774 ********** 2025-04-14 00:56:49.756433 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756440 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756450 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756457 | orchestrator | 2025-04-14 00:56:49.756464 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-04-14 00:56:49.756471 | orchestrator | Monday 14 April 2025 00:55:59 +0000 (0:00:00.333) 0:07:06.108 ********** 2025-04-14 00:56:49.756478 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756485 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756492 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756499 | orchestrator | 2025-04-14 00:56:49.756506 | orchestrator | TASK [include_role : trove] **************************************************** 2025-04-14 00:56:49.756513 | orchestrator | Monday 14 April 2025 00:55:59 +0000 (0:00:00.605) 0:07:06.713 ********** 2025-04-14 00:56:49.756520 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756527 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756533 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756540 | orchestrator | 2025-04-14 00:56:49.756547 | orchestrator | TASK [include_role : venus] **************************************************** 2025-04-14 00:56:49.756554 | orchestrator | Monday 14 April 2025 00:56:00 +0000 (0:00:00.586) 0:07:07.300 ********** 2025-04-14 00:56:49.756561 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756568 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756575 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756582 | orchestrator | 2025-04-14 00:56:49.756589 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-04-14 00:56:49.756596 | orchestrator | Monday 14 April 2025 00:56:01 +0000 (0:00:00.613) 0:07:07.913 ********** 2025-04-14 00:56:49.756603 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756610 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756617 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756623 | orchestrator | 2025-04-14 00:56:49.756630 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-04-14 00:56:49.756637 | orchestrator | Monday 14 April 2025 00:56:01 +0000 (0:00:00.315) 0:07:08.229 ********** 2025-04-14 00:56:49.756644 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.756651 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.756658 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.756665 | orchestrator | 2025-04-14 00:56:49.756672 | orchestrator | RUNNING HANDLER [loadbalancer : 
Check IP addresses on the API interface] ******* 2025-04-14 00:56:49.756679 | orchestrator | Monday 14 April 2025 00:56:02 +0000 (0:00:01.074) 0:07:09.303 ********** 2025-04-14 00:56:49.756686 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756693 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756700 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756707 | orchestrator | 2025-04-14 00:56:49.756714 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-04-14 00:56:49.756721 | orchestrator | Monday 14 April 2025 00:56:03 +0000 (0:00:00.934) 0:07:10.237 ********** 2025-04-14 00:56:49.756728 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756739 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756747 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756754 | orchestrator | 2025-04-14 00:56:49.756761 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-04-14 00:56:49.756768 | orchestrator | Monday 14 April 2025 00:56:03 +0000 (0:00:00.351) 0:07:10.589 ********** 2025-04-14 00:56:49.756775 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756781 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756788 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756794 | orchestrator | 2025-04-14 00:56:49.756801 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-04-14 00:56:49.756807 | orchestrator | Monday 14 April 2025 00:56:05 +0000 (0:00:01.340) 0:07:11.930 ********** 2025-04-14 00:56:49.756817 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756823 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756829 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756835 | orchestrator | 2025-04-14 00:56:49.756841 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-04-14 00:56:49.756848 | orchestrator | Monday 14 April 2025 00:56:06 +0000 (0:00:01.251) 0:07:13.181 ********** 2025-04-14 00:56:49.756854 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756860 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756866 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756872 | orchestrator | 2025-04-14 00:56:49.756878 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-04-14 00:56:49.756884 | orchestrator | Monday 14 April 2025 00:56:07 +0000 (0:00:00.953) 0:07:14.135 ********** 2025-04-14 00:56:49.756890 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.756896 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.756903 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.756909 | orchestrator | 2025-04-14 00:56:49.756915 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-04-14 00:56:49.756921 | orchestrator | Monday 14 April 2025 00:56:18 +0000 (0:00:10.747) 0:07:24.883 ********** 2025-04-14 00:56:49.756927 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.756933 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.756939 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.756945 | orchestrator | 2025-04-14 00:56:49.756951 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-04-14 00:56:49.756957 | orchestrator | Monday 14 April 2025 00:56:19 +0000 (0:00:01.083) 0:07:25.966 ********** 
2025-04-14 00:56:49.756964 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.756970 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.756976 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.756982 | orchestrator | 2025-04-14 00:56:49.756988 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-04-14 00:56:49.756994 | orchestrator | Monday 14 April 2025 00:56:30 +0000 (0:00:11.673) 0:07:37.640 ********** 2025-04-14 00:56:49.757000 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.757006 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.757012 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.757018 | orchestrator | 2025-04-14 00:56:49.757024 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-04-14 00:56:49.757034 | orchestrator | Monday 14 April 2025 00:56:31 +0000 (0:00:00.749) 0:07:38.390 ********** 2025-04-14 00:56:49.757040 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:56:49.757046 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:56:49.757052 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:56:49.757059 | orchestrator | 2025-04-14 00:56:49.757067 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-04-14 00:56:49.757074 | orchestrator | Monday 14 April 2025 00:56:40 +0000 (0:00:08.691) 0:07:47.082 ********** 2025-04-14 00:56:49.757080 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.757086 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757092 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757099 | orchestrator | 2025-04-14 00:56:49.757105 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-04-14 00:56:49.757111 | orchestrator | Monday 14 April 2025 00:56:40 +0000 (0:00:00.645) 0:07:47.727 ********** 2025-04-14 00:56:49.757117 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.757123 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757130 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757136 | orchestrator | 2025-04-14 00:56:49.757142 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-04-14 00:56:49.757148 | orchestrator | Monday 14 April 2025 00:56:41 +0000 (0:00:00.630) 0:07:48.358 ********** 2025-04-14 00:56:49.757155 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.757161 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757171 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757178 | orchestrator | 2025-04-14 00:56:49.757184 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-04-14 00:56:49.757190 | orchestrator | Monday 14 April 2025 00:56:41 +0000 (0:00:00.394) 0:07:48.752 ********** 2025-04-14 00:56:49.757196 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.757202 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757208 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757215 | orchestrator | 2025-04-14 00:56:49.757221 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-04-14 00:56:49.757227 | orchestrator | Monday 14 April 2025 00:56:42 +0000 (0:00:00.651) 0:07:49.403 ********** 2025-04-14 00:56:49.757233 | orchestrator | skipping: [testbed-node-0] 2025-04-14 
00:56:49.757248 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757254 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757260 | orchestrator | 2025-04-14 00:56:49.757266 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-04-14 00:56:49.757272 | orchestrator | Monday 14 April 2025 00:56:43 +0000 (0:00:00.635) 0:07:50.039 ********** 2025-04-14 00:56:49.757279 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:56:49.757285 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:56:49.757291 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:56:49.757297 | orchestrator | 2025-04-14 00:56:49.757303 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-04-14 00:56:49.757309 | orchestrator | Monday 14 April 2025 00:56:43 +0000 (0:00:00.347) 0:07:50.386 ********** 2025-04-14 00:56:49.757315 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.757321 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.757327 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.757334 | orchestrator | 2025-04-14 00:56:49.757340 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-04-14 00:56:49.757346 | orchestrator | Monday 14 April 2025 00:56:44 +0000 (0:00:01.267) 0:07:51.654 ********** 2025-04-14 00:56:49.757352 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:56:49.757358 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:56:49.757364 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:56:49.757374 | orchestrator | 2025-04-14 00:56:49.757380 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:56:49.757386 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-14 00:56:49.757392 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-14 00:56:49.757399 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-04-14 00:56:49.757405 | orchestrator | 2025-04-14 00:56:49.757411 | orchestrator | 2025-04-14 00:56:49.757417 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:56:49.757423 | orchestrator | Monday 14 April 2025 00:56:45 +0000 (0:00:01.151) 0:07:52.806 ********** 2025-04-14 00:56:49.757429 | orchestrator | =============================================================================== 2025-04-14 00:56:49.757435 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.67s 2025-04-14 00:56:49.757442 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.75s 2025-04-14 00:56:49.757448 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.69s 2025-04-14 00:56:49.757454 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.88s 2025-04-14 00:56:49.757460 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.16s 2025-04-14 00:56:49.757466 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.00s 2025-04-14 00:56:49.757472 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.71s 2025-04-14 00:56:49.757482 | orchestrator | haproxy-config : 
Configuring firewall for glance ------------------------ 6.35s 2025-04-14 00:56:49.757488 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.13s 2025-04-14 00:56:49.757494 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 6.09s 2025-04-14 00:56:49.757500 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.07s 2025-04-14 00:56:49.757509 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.85s 2025-04-14 00:56:49.757515 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 5.74s 2025-04-14 00:56:49.757521 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 5.66s 2025-04-14 00:56:49.757530 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.64s 2025-04-14 00:56:52.781666 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.42s 2025-04-14 00:56:52.781808 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.33s 2025-04-14 00:56:52.781828 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.18s 2025-04-14 00:56:52.781844 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.15s 2025-04-14 00:56:52.781858 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.86s 2025-04-14 00:56:52.781872 | orchestrator | 2025-04-14 00:56:49 | INFO  | Task 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 is in state STARTED 2025-04-14 00:56:52.781888 | orchestrator | 2025-04-14 00:56:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:52.781922 | orchestrator | 2025-04-14 00:56:52 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:56:52.783474 | orchestrator | 2025-04-14 00:56:52 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:52.784067 | orchestrator | 2025-04-14 00:56:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:52.790124 | orchestrator | 2025-04-14 00:56:52 | INFO  | Task 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 is in state STARTED 2025-04-14 00:56:55.826765 | orchestrator | 2025-04-14 00:56:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:55.826911 | orchestrator | 2025-04-14 00:56:55 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:56:55.827072 | orchestrator | 2025-04-14 00:56:55 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:55.827103 | orchestrator | 2025-04-14 00:56:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:56:55.827770 | orchestrator | 2025-04-14 00:56:55 | INFO  | Task 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 is in state STARTED 2025-04-14 00:56:58.888551 | orchestrator | 2025-04-14 00:56:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:56:58.888722 | orchestrator | 2025-04-14 00:56:58 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:56:58.889184 | orchestrator | 2025-04-14 00:56:58 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:56:58.891066 | orchestrator | 2025-04-14 00:56:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 
00:56:58.895293 | orchestrator | 2025-04-14 00:56:58 | INFO  | Task 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 is in state STARTED
[Status polling repeats once per second from 00:56:58 to 00:58:55; tasks e516675d-82e9-4236-87e9-d104f9de6fdf, d443319d-8407-47fc-b0b6-f7870c4a1069, afc851a2-7042-41e3-be43-561439f9152f and 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 remain in state STARTED throughout; the repeated status lines differ only in their timestamps.]
2025-04-14 00:58:55.001002 | orchestrator | 2025-04-14 00:58:55 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:58:55.001945 | orchestrator | 2025-04-14 00:58:55 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:58:55.005258 | orchestrator | 2025-04-14 00:58:55 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:58:55.006467 | orchestrator | 2025-04-14 00:58:55.006536 | orchestrator | 2025-04-14 00:58:55 | INFO  | Task 6badc9c4-77e9-4c0e-a8e1-e88fb97dc411 is in state SUCCESS 2025-04-14 00:58:55.007103 | orchestrator | 2025-04-14 00:58:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:58:55.008742 | orchestrator | 2025-04-14 00:58:55.008781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 00:58:55.008797 | orchestrator | 2025-04-14 00:58:55.008812 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 00:58:55.008826 | orchestrator | Monday 14 April 2025 00:56:49 +0000 (0:00:00.331) 0:00:00.331 ********** 2025-04-14 00:58:55.008840 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:58:55.008856 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:58:55.008870 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:58:55.008884 | orchestrator | 2025-04-14 00:58:55.008899 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 00:58:55.008913 | orchestrator | Monday 14 April 2025 00:56:50 +0000 (0:00:00.454) 0:00:00.785 ********** 2025-04-14 00:58:55.008928 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-04-14 00:58:55.008942 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-04-14 00:58:55.008956 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-04-14 00:58:55.008970 | orchestrator | 2025-04-14 00:58:55.008984 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-04-14 00:58:55.008998 | orchestrator | 2025-04-14 00:58:55.009012 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-14 00:58:55.009051 | orchestrator | Monday 14 April 2025 00:56:50 +0000 (0:00:00.317) 0:00:01.103 ********** 2025-04-14 00:58:55.009066 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:58:55.009080 | orchestrator | 2025-04-14 00:58:55.009094 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-04-14 00:58:55.009108 | orchestrator | Monday 14 April 2025 00:56:51 +0000 (0:00:00.728) 0:00:01.831 ********** 2025-04-14 00:58:55.009151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:58:55.009165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:58:55.009180 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-04-14 00:58:55.009194 | orchestrator | 2025-04-14 00:58:55.009208 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-04-14 00:58:55.009222 | orchestrator | Monday 14 April 2025 00:56:52 +0000 (0:00:00.770) 0:00:02.602 ********** 2025-04-14 00:58:55.009241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009416 | orchestrator | 2025-04-14 00:58:55.009432 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-14 00:58:55.009448 | orchestrator | Monday 14 April 2025 00:56:53 +0000 (0:00:01.443) 0:00:04.046 ********** 2025-04-14 00:58:55.009464 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:58:55.009480 | orchestrator | 2025-04-14 00:58:55.009496 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-04-14 00:58:55.009511 | orchestrator | Monday 14 April 2025 00:56:54 +0000 (0:00:00.765) 0:00:04.811 ********** 2025-04-14 00:58:55.009536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.009604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.009673 | orchestrator | 2025-04-14 00:58:55.009688 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-04-14 00:58:55.009703 | orchestrator | Monday 14 April 2025 00:56:57 +0000 (0:00:03.096) 0:00:07.908 ********** 2025-04-14 00:58:55.009718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.009732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.009747 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:58:55.009770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.009792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.009816 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:58:55.009831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.009847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.009861 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:58:55.009876 | orchestrator | 2025-04-14 00:58:55.009890 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-04-14 00:58:55.009910 | orchestrator | Monday 14 April 2025 00:56:58 +0000 (0:00:01.126) 0:00:09.035 ********** 2025-04-14 00:58:55.009931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.009954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.009979 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:58:55.009994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.010009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.010095 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:58:55.010154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-04-14 00:58:55.010194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-04-14 00:58:55.010244 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:58:55.010269 | orchestrator | 2025-04-14 00:58:55.010293 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-04-14 00:58:55.010316 | orchestrator | Monday 14 April 2025 00:56:59 +0000 (0:00:01.242) 0:00:10.277 ********** 2025-04-14 00:58:55.010340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010461 | orchestrator | 2025-04-14 00:58:55.010476 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-04-14 00:58:55.010497 | orchestrator | Monday 14 April 2025 00:57:02 +0000 (0:00:02.697) 0:00:12.975 ********** 2025-04-14 00:58:55.010511 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.010525 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:58:55.010539 | 
orchestrator | changed: [testbed-node-2] 2025-04-14 00:58:55.010553 | orchestrator | 2025-04-14 00:58:55.010567 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-04-14 00:58:55.010581 | orchestrator | Monday 14 April 2025 00:57:06 +0000 (0:00:04.005) 0:00:16.980 ********** 2025-04-14 00:58:55.010595 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.010609 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:58:55.010623 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:58:55.010638 | orchestrator | 2025-04-14 00:58:55.010652 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-04-14 00:58:55.010666 | orchestrator | Monday 14 April 2025 00:57:08 +0000 (0:00:01.893) 0:00:18.874 ********** 2025-04-14 00:58:55.010687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-04-14 00:58:55.010742 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-04-14 00:58:55.010811 | orchestrator | 2025-04-14 00:58:55.010826 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-14 00:58:55.010840 | orchestrator | Monday 14 April 2025 00:57:11 +0000 (0:00:02.625) 0:00:21.499 ********** 2025-04-14 00:58:55.010854 | 
orchestrator | skipping: [testbed-node-0] 2025-04-14 00:58:55.010869 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:58:55.010883 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:58:55.010897 | orchestrator | 2025-04-14 00:58:55.010912 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-14 00:58:55.010926 | orchestrator | Monday 14 April 2025 00:57:11 +0000 (0:00:00.618) 0:00:22.117 ********** 2025-04-14 00:58:55.010940 | orchestrator | 2025-04-14 00:58:55.010954 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-14 00:58:55.010968 | orchestrator | Monday 14 April 2025 00:57:12 +0000 (0:00:00.317) 0:00:22.435 ********** 2025-04-14 00:58:55.010982 | orchestrator | 2025-04-14 00:58:55.010996 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-04-14 00:58:55.011010 | orchestrator | Monday 14 April 2025 00:57:12 +0000 (0:00:00.075) 0:00:22.510 ********** 2025-04-14 00:58:55.011024 | orchestrator | 2025-04-14 00:58:55.011038 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-04-14 00:58:55.011059 | orchestrator | Monday 14 April 2025 00:57:12 +0000 (0:00:00.080) 0:00:22.591 ********** 2025-04-14 00:58:55.011073 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:58:55.011087 | orchestrator | 2025-04-14 00:58:55.011101 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-04-14 00:58:55.011145 | orchestrator | Monday 14 April 2025 00:57:12 +0000 (0:00:00.214) 0:00:22.805 ********** 2025-04-14 00:58:55.011161 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:58:55.011176 | orchestrator | 2025-04-14 00:58:55.011190 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-04-14 00:58:55.011204 | orchestrator | Monday 14 April 2025 00:57:12 +0000 (0:00:00.513) 0:00:23.319 ********** 2025-04-14 00:58:55.011217 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.011231 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:58:55.011245 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:58:55.011259 | orchestrator | 2025-04-14 00:58:55.011273 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-04-14 00:58:55.011287 | orchestrator | Monday 14 April 2025 00:57:45 +0000 (0:00:32.193) 0:00:55.513 ********** 2025-04-14 00:58:55.011301 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.011315 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:58:55.011332 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:58:55.011346 | orchestrator | 2025-04-14 00:58:55.011360 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-04-14 00:58:55.011375 | orchestrator | Monday 14 April 2025 00:58:41 +0000 (0:00:56.617) 0:01:52.130 ********** 2025-04-14 00:58:55.011390 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:58:55.011404 | orchestrator | 2025-04-14 00:58:55.011418 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-04-14 00:58:55.011432 | orchestrator | Monday 14 April 2025 00:58:42 +0000 (0:00:00.872) 0:01:53.003 ********** 2025-04-14 00:58:55.011446 | orchestrator | ok: [testbed-node-0] 2025-04-14 
00:58:55.011460 | orchestrator | 2025-04-14 00:58:55.011474 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-04-14 00:58:55.011488 | orchestrator | Monday 14 April 2025 00:58:45 +0000 (0:00:02.722) 0:01:55.726 ********** 2025-04-14 00:58:55.011502 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:58:55.011516 | orchestrator | 2025-04-14 00:58:55.011530 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-04-14 00:58:55.011549 | orchestrator | Monday 14 April 2025 00:58:47 +0000 (0:00:02.597) 0:01:58.323 ********** 2025-04-14 00:58:55.011563 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.011578 | orchestrator | 2025-04-14 00:58:55.011591 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-04-14 00:58:55.011606 | orchestrator | Monday 14 April 2025 00:58:51 +0000 (0:00:03.107) 0:02:01.430 ********** 2025-04-14 00:58:55.011620 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:58:55.011634 | orchestrator | 2025-04-14 00:58:55.011654 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:58:58.058550 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 00:58:58.058692 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:58:58.058714 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-04-14 00:58:58.058729 | orchestrator | 2025-04-14 00:58:58.058744 | orchestrator | 2025-04-14 00:58:58.058758 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:58:58.058774 | orchestrator | Monday 14 April 2025 00:58:53 +0000 (0:00:02.824) 0:02:04.255 ********** 2025-04-14 00:58:58.058819 | orchestrator | =============================================================================== 2025-04-14 00:58:58.058834 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 56.62s 2025-04-14 00:58:58.058848 | orchestrator | opensearch : Restart opensearch container ------------------------------ 32.19s 2025-04-14 00:58:58.058862 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.01s 2025-04-14 00:58:58.058876 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.11s 2025-04-14 00:58:58.058891 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.10s 2025-04-14 00:58:58.058905 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.82s 2025-04-14 00:58:58.058919 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.72s 2025-04-14 00:58:58.058933 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.70s 2025-04-14 00:58:58.058947 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.63s 2025-04-14 00:58:58.058968 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.60s 2025-04-14 00:58:58.058992 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.89s 2025-04-14 00:58:58.059017 | orchestrator | opensearch : Ensuring config directories exist 
-------------------------- 1.44s 2025-04-14 00:58:58.059041 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.24s 2025-04-14 00:58:58.059064 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.13s 2025-04-14 00:58:58.059081 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.87s 2025-04-14 00:58:58.059097 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.77s 2025-04-14 00:58:58.059143 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.77s 2025-04-14 00:58:58.059160 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s 2025-04-14 00:58:58.059176 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2025-04-14 00:58:58.059192 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.51s 2025-04-14 00:58:58.059227 | orchestrator | 2025-04-14 00:58:58 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:58:58.059466 | orchestrator | 2025-04-14 00:58:58 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:58:58.059577 | orchestrator | 2025-04-14 00:58:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:01.119493 | orchestrator | 2025-04-14 00:58:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:01.119635 | orchestrator | 2025-04-14 00:59:01 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:04.168962 | orchestrator | 2025-04-14 00:59:01 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:04.169081 | orchestrator | 2025-04-14 00:59:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:04.169099 | orchestrator | 2025-04-14 00:59:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:04.169204 | orchestrator | 2025-04-14 00:59:04 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:04.170394 | orchestrator | 2025-04-14 00:59:04 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:04.171239 | orchestrator | 2025-04-14 00:59:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:07.227906 | orchestrator | 2025-04-14 00:59:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:07.228035 | orchestrator | 2025-04-14 00:59:07 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:07.229096 | orchestrator | 2025-04-14 00:59:07 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:07.231314 | orchestrator | 2025-04-14 00:59:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:10.282082 | orchestrator | 2025-04-14 00:59:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:10.282271 | orchestrator | 2025-04-14 00:59:10 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:10.283721 | orchestrator | 2025-04-14 00:59:10 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:10.285453 | orchestrator | 2025-04-14 00:59:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:13.350612 | 
orchestrator | 2025-04-14 00:59:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:13.350775 | orchestrator | 2025-04-14 00:59:13 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:13.352555 | orchestrator | 2025-04-14 00:59:13 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:13.354503 | orchestrator | 2025-04-14 00:59:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:13.354847 | orchestrator | 2025-04-14 00:59:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:16.406967 | orchestrator | 2025-04-14 00:59:16 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:16.408366 | orchestrator | 2025-04-14 00:59:16 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:16.409958 | orchestrator | 2025-04-14 00:59:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:19.464762 | orchestrator | 2025-04-14 00:59:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:19.464904 | orchestrator | 2025-04-14 00:59:19 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:19.465378 | orchestrator | 2025-04-14 00:59:19 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:19.468982 | orchestrator | 2025-04-14 00:59:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:22.521541 | orchestrator | 2025-04-14 00:59:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:22.521677 | orchestrator | 2025-04-14 00:59:22 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:22.523784 | orchestrator | 2025-04-14 00:59:22 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:22.526477 | orchestrator | 2025-04-14 00:59:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:25.577274 | orchestrator | 2025-04-14 00:59:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:25.577411 | orchestrator | 2025-04-14 00:59:25 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:25.579731 | orchestrator | 2025-04-14 00:59:25 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:25.580450 | orchestrator | 2025-04-14 00:59:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:28.627705 | orchestrator | 2025-04-14 00:59:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:28.627843 | orchestrator | 2025-04-14 00:59:28 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:28.630299 | orchestrator | 2025-04-14 00:59:28 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:28.631321 | orchestrator | 2025-04-14 00:59:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:31.689557 | orchestrator | 2025-04-14 00:59:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:31.689698 | orchestrator | 2025-04-14 00:59:31 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:31.690821 | orchestrator | 2025-04-14 00:59:31 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:31.693260 | orchestrator | 2025-04-14 00:59:31 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:34.742681 | orchestrator | 2025-04-14 00:59:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:34.742793 | orchestrator | 2025-04-14 00:59:34 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:34.744297 | orchestrator | 2025-04-14 00:59:34 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:34.745960 | orchestrator | 2025-04-14 00:59:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:34.746169 | orchestrator | 2025-04-14 00:59:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:37.799574 | orchestrator | 2025-04-14 00:59:37 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:37.800443 | orchestrator | 2025-04-14 00:59:37 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:37.801727 | orchestrator | 2025-04-14 00:59:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:40.852960 | orchestrator | 2025-04-14 00:59:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:40.853164 | orchestrator | 2025-04-14 00:59:40 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:40.854187 | orchestrator | 2025-04-14 00:59:40 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:40.854801 | orchestrator | 2025-04-14 00:59:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:43.899336 | orchestrator | 2025-04-14 00:59:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:43.899501 | orchestrator | 2025-04-14 00:59:43 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:43.902262 | orchestrator | 2025-04-14 00:59:43 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:43.903977 | orchestrator | 2025-04-14 00:59:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:43.904214 | orchestrator | 2025-04-14 00:59:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:46.948791 | orchestrator | 2025-04-14 00:59:46 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:46.949913 | orchestrator | 2025-04-14 00:59:46 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:46.951321 | orchestrator | 2025-04-14 00:59:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:49.990726 | orchestrator | 2025-04-14 00:59:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:49.990888 | orchestrator | 2025-04-14 00:59:49 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:49.992028 | orchestrator | 2025-04-14 00:59:49 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state STARTED 2025-04-14 00:59:49.994374 | orchestrator | 2025-04-14 00:59:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:49.994472 | orchestrator | 2025-04-14 00:59:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:53.049719 | orchestrator | 2025-04-14 00:59:53 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:53.051565 | orchestrator | 2025-04-14 00:59:53 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state 
STARTED 2025-04-14 00:59:53.054141 | orchestrator | 2025-04-14 00:59:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:53.054536 | orchestrator | 2025-04-14 00:59:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:56.121187 | orchestrator | 2025-04-14 00:59:56 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:56.127428 | orchestrator | 2025-04-14 00:59:56 | INFO  | Task d443319d-8407-47fc-b0b6-f7870c4a1069 is in state SUCCESS 2025-04-14 00:59:56.129573 | orchestrator | 2025-04-14 00:59:56.129641 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 00:59:56.129658 | orchestrator | 2025-04-14 00:59:56.129672 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-04-14 00:59:56.129687 | orchestrator | 2025-04-14 00:59:56.129701 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-14 00:59:56.129715 | orchestrator | Monday 14 April 2025 00:46:15 +0000 (0:00:02.138) 0:00:02.138 ********** 2025-04-14 00:59:56.129730 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.129745 | orchestrator | 2025-04-14 00:59:56.129759 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-14 00:59:56.129793 | orchestrator | Monday 14 April 2025 00:46:16 +0000 (0:00:01.590) 0:00:03.729 ********** 2025-04-14 00:59:56.129809 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.129824 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-14 00:59:56.129838 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-14 00:59:56.129852 | orchestrator | 2025-04-14 00:59:56.129866 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-14 00:59:56.129880 | orchestrator | Monday 14 April 2025 00:46:17 +0000 (0:00:00.842) 0:00:04.572 ********** 2025-04-14 00:59:56.129895 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.129910 | orchestrator | 2025-04-14 00:59:56.129924 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-14 00:59:56.129937 | orchestrator | Monday 14 April 2025 00:46:19 +0000 (0:00:01.580) 0:00:06.152 ********** 2025-04-14 00:59:56.129951 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.129967 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.129981 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.129995 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.130009 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.130098 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.130116 | orchestrator | 2025-04-14 00:59:56.130132 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-14 00:59:56.130148 | orchestrator | Monday 14 April 2025 00:46:20 +0000 (0:00:01.611) 0:00:07.764 ********** 2025-04-14 00:59:56.130422 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.130443 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.130458 | 
orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.130472 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.130486 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.130526 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.130541 | orchestrator | 2025-04-14 00:59:56.130555 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-14 00:59:56.130569 | orchestrator | Monday 14 April 2025 00:46:22 +0000 (0:00:01.215) 0:00:08.979 ********** 2025-04-14 00:59:56.130582 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.130596 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.130610 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.130624 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.130638 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.130652 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.130666 | orchestrator | 2025-04-14 00:59:56.130680 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-14 00:59:56.130694 | orchestrator | Monday 14 April 2025 00:46:23 +0000 (0:00:01.684) 0:00:10.663 ********** 2025-04-14 00:59:56.130707 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.130721 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.130735 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.130757 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.130771 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.130785 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.130799 | orchestrator | 2025-04-14 00:59:56.130813 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-14 00:59:56.130827 | orchestrator | Monday 14 April 2025 00:46:25 +0000 (0:00:01.568) 0:00:12.232 ********** 2025-04-14 00:59:56.130841 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.130870 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.130885 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.130899 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.130913 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.130926 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.130979 | orchestrator | 2025-04-14 00:59:56.130995 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-14 00:59:56.131009 | orchestrator | Monday 14 April 2025 00:46:26 +0000 (0:00:01.528) 0:00:13.761 ********** 2025-04-14 00:59:56.131023 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.131063 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.131089 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.131103 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.131117 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.131131 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.131145 | orchestrator | 2025-04-14 00:59:56.131169 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-14 00:59:56.131192 | orchestrator | Monday 14 April 2025 00:46:28 +0000 (0:00:01.625) 0:00:15.386 ********** 2025-04-14 00:59:56.131216 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.131240 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.131263 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.131285 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.131307 | orchestrator | 
skipping: [testbed-node-4] 2025-04-14 00:59:56.131331 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.131539 | orchestrator | 2025-04-14 00:59:56.131559 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-14 00:59:56.131573 | orchestrator | Monday 14 April 2025 00:46:29 +0000 (0:00:01.227) 0:00:16.614 ********** 2025-04-14 00:59:56.131588 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.131748 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.131766 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.131780 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.131794 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.131807 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.131821 | orchestrator | 2025-04-14 00:59:56.131851 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-14 00:59:56.131866 | orchestrator | Monday 14 April 2025 00:46:31 +0000 (0:00:01.547) 0:00:18.162 ********** 2025-04-14 00:59:56.131880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.131908 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.131922 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.131936 | orchestrator | 2025-04-14 00:59:56.131965 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-14 00:59:56.131980 | orchestrator | Monday 14 April 2025 00:46:32 +0000 (0:00:01.206) 0:00:19.368 ********** 2025-04-14 00:59:56.131994 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.132008 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.132022 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.132063 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.132084 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.132098 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.132112 | orchestrator | 2025-04-14 00:59:56.132126 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-14 00:59:56.132141 | orchestrator | Monday 14 April 2025 00:46:35 +0000 (0:00:02.747) 0:00:22.115 ********** 2025-04-14 00:59:56.132154 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.132168 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.132182 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.132196 | orchestrator | 2025-04-14 00:59:56.132303 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-14 00:59:56.132318 | orchestrator | Monday 14 April 2025 00:46:39 +0000 (0:00:03.978) 0:00:26.094 ********** 2025-04-14 00:59:56.132333 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.132347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.132361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.132405 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.132422 | orchestrator | 2025-04-14 00:59:56.132436 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-14 00:59:56.132457 | 
orchestrator | Monday 14 April 2025 00:46:40 +0000 (0:00:00.876) 0:00:26.970 ********** 2025-04-14 00:59:56.132473 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132490 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132504 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132519 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.132533 | orchestrator | 2025-04-14 00:59:56.132547 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-14 00:59:56.132560 | orchestrator | Monday 14 April 2025 00:46:41 +0000 (0:00:01.245) 0:00:28.215 ********** 2025-04-14 00:59:56.132576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132615 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132631 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.132655 | orchestrator | 2025-04-14 00:59:56.132679 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-14 00:59:56.132712 | orchestrator | Monday 14 April 2025 00:46:41 +0000 (0:00:00.212) 0:00:28.428 ********** 2025-04-14 00:59:56.132739 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-14 00:46:36.770513', 'end': '2025-04-14 00:46:37.035496', 'delta': '0:00:00.264983', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 
'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-14 00:46:37.799105', 'end': '2025-04-14 00:46:38.062168', 'delta': '0:00:00.263063', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132788 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-14 00:46:38.715329', 'end': '2025-04-14 00:46:38.947814', 'delta': '0:00:00.232485', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-14 00:59:56.132803 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.132817 | orchestrator | 2025-04-14 00:59:56.132831 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-14 00:59:56.132850 | orchestrator | Monday 14 April 2025 00:46:41 +0000 (0:00:00.207) 0:00:28.635 ********** 2025-04-14 00:59:56.132873 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.132894 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.132916 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.132940 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.132959 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.132973 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.132987 | orchestrator | 2025-04-14 00:59:56.133001 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-14 00:59:56.133015 | orchestrator | Monday 14 April 2025 00:46:44 +0000 (0:00:03.188) 0:00:31.823 ********** 2025-04-14 00:59:56.133072 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.133087 | orchestrator | 2025-04-14 00:59:56.133101 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-14 00:59:56.133115 | orchestrator | Monday 14 April 2025 00:46:45 +0000 (0:00:00.836) 0:00:32.660 ********** 2025-04-14 00:59:56.133129 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133143 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133157 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.133170 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.133184 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.133198 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.133211 | orchestrator | 2025-04-14 
00:59:56.133225 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-14 00:59:56.133239 | orchestrator | Monday 14 April 2025 00:46:46 +0000 (0:00:00.867) 0:00:33.527 ********** 2025-04-14 00:59:56.133253 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133273 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133288 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.133301 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.133315 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.133328 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.133342 | orchestrator | 2025-04-14 00:59:56.133356 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 00:59:56.133411 | orchestrator | Monday 14 April 2025 00:46:48 +0000 (0:00:02.092) 0:00:35.620 ********** 2025-04-14 00:59:56.133426 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133440 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133454 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.133468 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.133482 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.133495 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.133509 | orchestrator | 2025-04-14 00:59:56.133523 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-14 00:59:56.133537 | orchestrator | Monday 14 April 2025 00:46:49 +0000 (0:00:00.904) 0:00:36.524 ********** 2025-04-14 00:59:56.133560 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133575 | orchestrator | 2025-04-14 00:59:56.133589 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-14 00:59:56.133603 | orchestrator | Monday 14 April 2025 00:46:50 +0000 (0:00:00.371) 0:00:36.896 ********** 2025-04-14 00:59:56.133617 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133631 | orchestrator | 2025-04-14 00:59:56.133644 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 00:59:56.133658 | orchestrator | Monday 14 April 2025 00:46:50 +0000 (0:00:00.324) 0:00:37.220 ********** 2025-04-14 00:59:56.133672 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133686 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133700 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.133713 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.133727 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.133741 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.133754 | orchestrator | 2025-04-14 00:59:56.133768 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-14 00:59:56.133792 | orchestrator | Monday 14 April 2025 00:46:51 +0000 (0:00:00.822) 0:00:38.042 ********** 2025-04-14 00:59:56.133812 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133827 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133840 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.133854 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.133868 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.133881 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.133895 | orchestrator | 
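The skipped "resolve device link(s)" task above, together with the "set_fact build devices from resolved symlinks" task that follows, normalizes any /dev/disk/by-* symlinks in the configured OSD device list to canonical kernel names before the later Ceph roles consume them; it is skipped in this run because the testbed supplies plain device paths. A minimal sketch of that normalization step, using a hypothetical Python helper rather than the actual ceph-ansible tasks:

    # Illustrative sketch only -- not the ceph-ansible implementation.
    # Resolve /dev/disk/by-* symlinks to their canonical /dev/sdX targets.
    import os

    def resolve_devices(devices):
        resolved = []
        for dev in devices:
            # realpath follows symlinks, e.g.
            # /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_... -> /dev/sdb
            resolved.append(os.path.realpath(dev))
        return resolved

    # Hypothetical input; the testbed in this log already uses plain names.
    print(resolve_devices(["/dev/sdb", "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_example"]))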
2025-04-14 00:59:56.133909 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-14 00:59:56.133931 | orchestrator | Monday 14 April 2025 00:46:52 +0000 (0:00:01.424) 0:00:39.467 ********** 2025-04-14 00:59:56.133945 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.133959 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.133972 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.134244 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.134262 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.134276 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.134290 | orchestrator | 2025-04-14 00:59:56.134304 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-14 00:59:56.134318 | orchestrator | Monday 14 April 2025 00:46:53 +0000 (0:00:00.836) 0:00:40.303 ********** 2025-04-14 00:59:56.134332 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.134346 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.134359 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.134373 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.134387 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.134401 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.134414 | orchestrator | 2025-04-14 00:59:56.134428 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-14 00:59:56.134442 | orchestrator | Monday 14 April 2025 00:46:54 +0000 (0:00:01.104) 0:00:41.408 ********** 2025-04-14 00:59:56.134456 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.134469 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.134483 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.134497 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.134511 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.134524 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.134538 | orchestrator | 2025-04-14 00:59:56.134552 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-14 00:59:56.134566 | orchestrator | Monday 14 April 2025 00:46:55 +0000 (0:00:00.820) 0:00:42.228 ********** 2025-04-14 00:59:56.134579 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.134593 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.134607 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.134620 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.134634 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.134648 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.134661 | orchestrator | 2025-04-14 00:59:56.134681 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-14 00:59:56.134696 | orchestrator | Monday 14 April 2025 00:46:56 +0000 (0:00:01.223) 0:00:43.451 ********** 2025-04-14 00:59:56.134710 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.134794 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.134810 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.134833 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.134848 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.134862 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.134876 | 
orchestrator | 2025-04-14 00:59:56.134890 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-14 00:59:56.134905 | orchestrator | Monday 14 April 2025 00:46:57 +0000 (0:00:00.843) 0:00:44.295 ********** 2025-04-14 00:59:56.134920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.134935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.134972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.134992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318a826d-e453-41a1-9cbe-aee990c4d38b', 'scsi-SQEMU_QEMU_HARDDISK_318a826d-e453-41a1-9cbe-aee990c4d38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d452a86-d7ed-4b7e-a6e2-8adfa0173156', 'scsi-SQEMU_QEMU_HARDDISK_1d452a86-d7ed-4b7e-a6e2-8adfa0173156'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61d8c1b1-8af8-4257-810b-e0715f81f0ca', 'scsi-SQEMU_QEMU_HARDDISK_61d8c1b1-8af8-4257-810b-e0715f81f0ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-04-14 00:59:56.135443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135500 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.135522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e', 'scsi-SQEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part1', 'scsi-SQEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part14', 'scsi-SQEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part15', 'scsi-SQEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part16', 'scsi-SQEMU_QEMU_HARDDISK_513c088e-3162-41df-b822-52bd96b6413e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4c093f95-6486-49b6-be92-05fa28509200', 'scsi-SQEMU_QEMU_HARDDISK_4c093f95-6486-49b6-be92-05fa28509200'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f216f5ad-8b9f-40bf-b892-25305f930110', 'scsi-SQEMU_QEMU_HARDDISK_f216f5ad-8b9f-40bf-b892-25305f930110'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4895685-066a-4248-b20c-4cd40b9ff210', 'scsi-SQEMU_QEMU_HARDDISK_f4895685-066a-4248-b20c-4cd40b9ff210'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135687 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.135707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf', 'scsi-SQEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_ea6d87d8-8d23-4a2a-943a-5d6f418db5cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc318d73-efa9-4c13-b4ab-953b52f9b4b0', 'scsi-SQEMU_QEMU_HARDDISK_fc318d73-efa9-4c13-b4ab-953b52f9b4b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff496d9e-d724-4be6-b701-ae323f1b3d4d', 'scsi-SQEMU_QEMU_HARDDISK_ff496d9e-d724-4be6-b701-ae323f1b3d4d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec5b891c-a93a-4443-952c-376a64ed5153', 'scsi-SQEMU_QEMU_HARDDISK_ec5b891c-a93a-4443-952c-376a64ed5153'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.135852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--010b5855--d3d9--5348--85e9--2943091c3a59-osd--block--010b5855--d3d9--5348--85e9--2943091c3a59', 'dm-uuid-LVM-TqHshLn3iYUe960yiXD5OXZHtSBtOj2m3zisZzkBLeEnn6MTuT90ygDOtuTYvAuF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47a37963--cc76--524e--bf57--deb935e0a7e9-osd--block--47a37963--cc76--524e--bf57--deb935e0a7e9', 'dm-uuid-LVM-y1ZmIyxYKhx4sUrw7xe8MGNMKsxtuS4mjC2i6a3UALT7T4YkxqXjASzL5fefG51j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.135986 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.136000 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part16', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--010b5855--d3d9--5348--85e9--2943091c3a59-osd--block--010b5855--d3d9--5348--85e9--2943091c3a59'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mrf4dD-GL5h-E03t-CBbj-5jPv-pgYj-wsFAyU', 'scsi-0QEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e', 'scsi-SQEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--47a37963--cc76--524e--bf57--deb935e0a7e9-osd--block--47a37963--cc76--524e--bf57--deb935e0a7e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-93xZn8-IWyK-BjNI-AvCP-us34-mI91-TGygRI', 'scsi-0QEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e', 'scsi-SQEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2', 'scsi-SQEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89320cc7--f853--5314--9a76--744a2d019bd6-osd--block--89320cc7--f853--5314--9a76--744a2d019bd6', 'dm-uuid-LVM-4BRgDs484beWEfjdIb2VPkFOf4kTQqv5GhNa0iWWdIcvbW9kmd5z0tVFqIiO13G7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cf203b--da46--5fbb--85f7--5c1db9738ebe-osd--block--a8cf203b--da46--5fbb--85f7--5c1db9738ebe', 'dm-uuid-LVM-sO5HTzVp8cMsaMSKOodkgw3AtLe66zPl0lC0GI1jXxIjQ8TMPrcKSy5BAh3PGT4t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
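For context: the long runs of "skipping" output above and below come from tasks that loop over every block device reported in the host facts and leave out loop, device-mapper and CD-ROM devices as well as disks that are already partitioned or claimed by LVM/Ceph. A minimal illustrative sketch of that pattern follows; the play name, the candidate-selection conditions and the variable names are assumptions chosen for illustration, not the actual ceph-ansible/OSISM task:

- name: Illustrative per-device skip pattern (sketch, not the real ceph-ansible task)
  hosts: localhost
  gather_facts: true
  tasks:
    - name: Consider only unused, non-virtual disks as OSD candidates
      ansible.builtin.debug:
        msg: "candidate device: /dev/{{ item.key }} ({{ item.value.size }})"
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when:
        - item.key is not match('^(loop|dm-|sr)')   # skip loop, device-mapper and cdrom devices
        - item.value.partitions | length == 0       # skip disks that already carry partitions (e.g. sda)
        - item.value.holders | length == 0          # skip disks already claimed, e.g. by a Ceph OSD LV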
2025-04-14 00:59:56.136619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136677 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.136689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3f558b9--064d--5710--baa4--8e41f44a2baf-osd--block--b3f558b9--064d--5710--baa4--8e41f44a2baf', 'dm-uuid-LVM-852tudrJnju0BoQciiOZqFqgyFmvtD1x0ZzSlD0QeCAtuVTFBUEwL0Xm3fd7KiAZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89320cc7--f853--5314--9a76--744a2d019bd6-osd--block--89320cc7--f853--5314--9a76--744a2d019bd6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZfeZr-qMba-0Ko2-INeM-pJEo-LPfT-kSt3gu', 'scsi-0QEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9', 'scsi-SQEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e3b39ff--ab1d--556f--9f1e--d127c66e789a-osd--block--1e3b39ff--ab1d--556f--9f1e--d127c66e789a', 'dm-uuid-LVM-lEfTGmpWDk3p7vqZv5369L2FJFmfdaWfdjcJ9RegfbSoJGFOkcQYhswTuJxcd04a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a8cf203b--da46--5fbb--85f7--5c1db9738ebe-osd--block--a8cf203b--da46--5fbb--85f7--5c1db9738ebe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eR6ePA-4v99-Besb-lpQV-et9r-uPvm-AveYNI', 'scsi-0QEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d', 'scsi-SQEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57', 'scsi-SQEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.136930 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.136943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.136988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.137000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.137026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 00:59:56.137081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.137119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b3f558b9--064d--5710--baa4--8e41f44a2baf-osd--block--b3f558b9--064d--5710--baa4--8e41f44a2baf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VvOAqp-pSDs-CwAn-MWjt-UXVs-306O-ApYEVf', 'scsi-0QEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496', 'scsi-SQEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.137144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e3b39ff--ab1d--556f--9f1e--d127c66e789a-osd--block--1e3b39ff--ab1d--556f--9f1e--d127c66e789a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CHHBLv-LfMM-0O7E-z7MO-wwRQ-sKY3-phLN6I', 'scsi-0QEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3', 'scsi-SQEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.137166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f', 'scsi-SQEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.137202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 00:59:56.137225 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.137248 | orchestrator | 2025-04-14 00:59:56.137269 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-14 00:59:56.137291 | orchestrator | Monday 14 April 2025 00:47:00 +0000 (0:00:02.709) 0:00:47.005 ********** 2025-04-14 00:59:56.137313 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.137335 | orchestrator | 2025-04-14 00:59:56.137358 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-14 00:59:56.137380 | orchestrator | Monday 14 April 2025 00:47:00 +0000 (0:00:00.578) 0:00:47.583 ********** 2025-04-14 00:59:56.137445 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.137470 | orchestrator | 2025-04-14 00:59:56.137493 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-04-14 00:59:56.137516 | orchestrator | Monday 14 April 2025 00:47:00 +0000 (0:00:00.258) 0:00:47.842 ********** 2025-04-14 00:59:56.137538 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.137560 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.137574 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.137587 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.137599 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.137611 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.137623 | orchestrator | 2025-04-14 00:59:56.137635 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-14 00:59:56.137648 | orchestrator | Monday 14 April 2025 00:47:02 +0000 (0:00:01.352) 0:00:49.194 ********** 2025-04-14 00:59:56.137660 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.137673 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.137685 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.137697 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.137710 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.137722 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.137734 | orchestrator | 2025-04-14 00:59:56.137746 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-14 00:59:56.137759 | orchestrator | Monday 14 April 2025 00:47:03 +0000 (0:00:01.519) 0:00:50.714 ********** 2025-04-14 00:59:56.137771 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.137783 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.137795 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.137807 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.137819 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.137831 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.137844 | orchestrator | 2025-04-14 00:59:56.137860 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 00:59:56.137881 | orchestrator | Monday 14 April 2025 00:47:04 +0000 (0:00:00.810) 0:00:51.524 ********** 2025-04-14 00:59:56.137901 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.137921 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.137941 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.137960 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.137993 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.138085 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.138115 | orchestrator | 2025-04-14 00:59:56.138136 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 00:59:56.138156 | orchestrator | Monday 14 April 2025 00:47:05 +0000 (0:00:01.284) 0:00:52.809 ********** 2025-04-14 00:59:56.138169 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.138181 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.138193 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.138205 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.138217 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.138229 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.138242 | orchestrator | 2025-04-14 00:59:56.138254 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 00:59:56.138266 
| orchestrator | Monday 14 April 2025 00:47:06 +0000 (0:00:00.944) 0:00:53.753 ********** 2025-04-14 00:59:56.138278 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.138291 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.138303 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.138315 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.138327 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.138339 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.138351 | orchestrator | 2025-04-14 00:59:56.138364 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 00:59:56.138376 | orchestrator | Monday 14 April 2025 00:47:08 +0000 (0:00:01.388) 0:00:55.142 ********** 2025-04-14 00:59:56.138388 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.138400 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.138412 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.138442 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.138463 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.138482 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.138501 | orchestrator | 2025-04-14 00:59:56.138522 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-14 00:59:56.138541 | orchestrator | Monday 14 April 2025 00:47:09 +0000 (0:00:01.146) 0:00:56.288 ********** 2025-04-14 00:59:56.138560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.138581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.138603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-14 00:59:56.138624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.138645 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.138667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-14 00:59:56.138687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-14 00:59:56.138708 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-14 00:59:56.138729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 00:59:56.138759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 00:59:56.138781 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.138794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 00:59:56.138807 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-14 00:59:56.138819 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 00:59:56.138831 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-14 00:59:56.138843 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.138856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 00:59:56.138868 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.138881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 00:59:56.138893 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 00:59:56.138915 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.138927 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 00:59:56.138939 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 00:59:56.138951 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.138964 | orchestrator | 2025-04-14 00:59:56.138976 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-14 00:59:56.138988 | orchestrator | Monday 14 April 2025 00:47:12 +0000 (0:00:03.046) 0:00:59.335 ********** 2025-04-14 00:59:56.139000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.139013 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.139025 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-14 00:59:56.139065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.139084 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-14 00:59:56.139096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-14 00:59:56.139108 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.139121 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-14 00:59:56.139133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-14 00:59:56.139145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 00:59:56.139158 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 00:59:56.139170 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-14 00:59:56.139182 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.139194 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.139207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 00:59:56.139219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 00:59:56.139231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 00:59:56.139244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 00:59:56.139256 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.139268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 00:59:56.139289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 00:59:56.139303 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.139315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 00:59:56.139327 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.139339 | orchestrator | 2025-04-14 00:59:56.139352 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-14 00:59:56.139364 | orchestrator | Monday 14 April 2025 00:47:16 +0000 (0:00:03.794) 0:01:03.130 ********** 2025-04-14 00:59:56.139376 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.139389 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-14 00:59:56.139402 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-04-14 00:59:56.139414 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-04-14 00:59:56.139426 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-14 00:59:56.139439 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-04-14 00:59:56.139451 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-14 00:59:56.139463 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-14 00:59:56.139475 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-04-14 00:59:56.139487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-14 00:59:56.139500 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-14 00:59:56.139512 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-14 00:59:56.139524 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-04-14 00:59:56.139536 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-04-14 00:59:56.139555 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-14 00:59:56.139567 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-14 00:59:56.139580 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-14 00:59:56.139592 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-04-14 00:59:56.139604 | orchestrator | 2025-04-14 00:59:56.139617 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-14 00:59:56.139629 | orchestrator | Monday 14 April 2025 00:47:24 +0000 (0:00:08.505) 0:01:11.636 ********** 2025-04-14 00:59:56.139641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.139654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.139666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.139678 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.139690 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-14 00:59:56.139702 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-14 00:59:56.139715 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-14 00:59:56.139727 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-14 00:59:56.139739 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-14 00:59:56.139752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-14 00:59:56.139766 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.139794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 00:59:56.139814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 00:59:56.139834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 00:59:56.139852 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.139872 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 00:59:56.139894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 00:59:56.139907 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.139924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 00:59:56.139945 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.139965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 00:59:56.139985 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 00:59:56.140006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 00:59:56.140091 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140111 | orchestrator | 2025-04-14 00:59:56.140124 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses 
to monitor_interface - ipv6] **** 2025-04-14 00:59:56.140137 | orchestrator | Monday 14 April 2025 00:47:26 +0000 (0:00:01.514) 0:01:13.151 ********** 2025-04-14 00:59:56.140149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.140167 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.140180 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-04-14 00:59:56.140192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-04-14 00:59:56.140205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.140217 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-04-14 00:59:56.140229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-04-14 00:59:56.140242 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-04-14 00:59:56.140254 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-04-14 00:59:56.140266 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.140279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 00:59:56.140291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 00:59:56.140303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 00:59:56.140325 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.140336 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.140347 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.140357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 00:59:56.140374 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 00:59:56.140385 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 00:59:56.140395 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.140405 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 00:59:56.140415 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 00:59:56.140425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 00:59:56.140436 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140446 | orchestrator | 2025-04-14 00:59:56.140456 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-14 00:59:56.140466 | orchestrator | Monday 14 April 2025 00:47:27 +0000 (0:00:00.862) 0:01:14.013 ********** 2025-04-14 00:59:56.140477 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-14 00:59:56.140487 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 00:59:56.140497 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 00:59:56.140508 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 00:59:56.140518 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-04-14 00:59:56.140532 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 00:59:56.140542 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 
00:59:56.140552 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 00:59:56.140562 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-04-14 00:59:56.140572 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 00:59:56.140582 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 00:59:56.140592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 00:59:56.140603 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 00:59:56.140613 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 00:59:56.140623 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 00:59:56.140633 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.140643 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.140653 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 00:59:56.140663 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 00:59:56.140673 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 00:59:56.140683 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140694 | orchestrator | 2025-04-14 00:59:56.140704 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-14 00:59:56.140715 | orchestrator | Monday 14 April 2025 00:47:28 +0000 (0:00:01.220) 0:01:15.234 ********** 2025-04-14 00:59:56.140725 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.140735 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.140756 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.140766 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.140777 | orchestrator | 2025-04-14 00:59:56.140787 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.140797 | orchestrator | Monday 14 April 2025 00:47:29 +0000 (0:00:01.275) 0:01:16.509 ********** 2025-04-14 00:59:56.140808 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.140818 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.140828 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140838 | orchestrator | 2025-04-14 00:59:56.140848 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.140858 | orchestrator | Monday 14 April 2025 00:47:30 +0000 (0:00:00.606) 0:01:17.116 ********** 2025-04-14 00:59:56.140868 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.140878 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.140888 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140898 | orchestrator | 2025-04-14 00:59:56.140909 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
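For context: the _monitor_addresses / _current_monitor_address tasks above assemble a list of {'name': ..., 'addr': ...} entries for the three monitor nodes (192.168.16.10-12 in this run) and then pick the entry belonging to the host currently being processed. A small sketch of that selection follows; the task and variable names are chosen for illustration rather than taken from the ceph-ansible role:

- name: Illustrative monitor-address selection (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    current_host: testbed-node-1              # stand-in for the host being processed
    _monitor_addresses:
      - { name: testbed-node-0, addr: 192.168.16.10 }
      - { name: testbed-node-1, addr: 192.168.16.11 }
      - { name: testbed-node-2, addr: 192.168.16.12 }
  tasks:
    - name: Pick the entry whose name matches the current host
      ansible.builtin.set_fact:
        _current_monitor_address: "{{ (_monitor_addresses | selectattr('name', 'equalto', current_host) | first).addr }}"

    - name: Show the result (expected 192.168.16.11 for testbed-node-1)
      ansible.builtin.debug:
        var: _current_monitor_address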
2025-04-14 00:59:56.140919 | orchestrator | Monday 14 April 2025 00:47:31 +0000 (0:00:00.785) 0:01:17.902 ********** 2025-04-14 00:59:56.140929 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.140939 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.140949 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.140959 | orchestrator | 2025-04-14 00:59:56.140969 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.140979 | orchestrator | Monday 14 April 2025 00:47:31 +0000 (0:00:00.510) 0:01:18.412 ********** 2025-04-14 00:59:56.140989 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.140999 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.141010 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.141020 | orchestrator | 2025-04-14 00:59:56.141030 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.141062 | orchestrator | Monday 14 April 2025 00:47:32 +0000 (0:00:00.751) 0:01:19.164 ********** 2025-04-14 00:59:56.141073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.141083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.141093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.141104 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141114 | orchestrator | 2025-04-14 00:59:56.141125 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.141135 | orchestrator | Monday 14 April 2025 00:47:32 +0000 (0:00:00.578) 0:01:19.742 ********** 2025-04-14 00:59:56.141145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.141155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.141165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.141175 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141185 | orchestrator | 2025-04-14 00:59:56.141195 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.141206 | orchestrator | Monday 14 April 2025 00:47:33 +0000 (0:00:00.977) 0:01:20.720 ********** 2025-04-14 00:59:56.141215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.141226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.141236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.141246 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141261 | orchestrator | 2025-04-14 00:59:56.141271 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.141282 | orchestrator | Monday 14 April 2025 00:47:34 +0000 (0:00:00.771) 0:01:21.491 ********** 2025-04-14 00:59:56.141297 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.141308 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.141318 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.141332 | orchestrator | 2025-04-14 00:59:56.141342 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.141353 | orchestrator | Monday 14 April 2025 00:47:35 +0000 (0:00:00.953) 0:01:22.445 ********** 2025-04-14 00:59:56.141363 | 
orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-14 00:59:56.141373 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-14 00:59:56.141384 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-14 00:59:56.141394 | orchestrator | 2025-04-14 00:59:56.141404 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.141414 | orchestrator | Monday 14 April 2025 00:47:37 +0000 (0:00:02.196) 0:01:24.641 ********** 2025-04-14 00:59:56.141424 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141434 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141445 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141455 | orchestrator | 2025-04-14 00:59:56.141465 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.141475 | orchestrator | Monday 14 April 2025 00:47:38 +0000 (0:00:00.747) 0:01:25.388 ********** 2025-04-14 00:59:56.141485 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141495 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141506 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141516 | orchestrator | 2025-04-14 00:59:56.141526 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.141536 | orchestrator | Monday 14 April 2025 00:47:39 +0000 (0:00:00.873) 0:01:26.262 ********** 2025-04-14 00:59:56.141546 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.141557 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141567 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.141577 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141587 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.141597 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141608 | orchestrator | 2025-04-14 00:59:56.141618 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.141628 | orchestrator | Monday 14 April 2025 00:47:40 +0000 (0:00:00.766) 0:01:27.028 ********** 2025-04-14 00:59:56.141638 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.141649 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141659 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.141669 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141679 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.141689 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141700 | orchestrator | 2025-04-14 00:59:56.141714 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.141724 | orchestrator | Monday 14 April 2025 00:47:41 +0000 (0:00:00.974) 0:01:28.003 ********** 2025-04-14 00:59:56.141734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.141745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.141755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
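For context: the rgw_instances tasks above produce one entry per RadosGW instance in the shape visible in the skipped items, e.g. {'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}. A short sketch of how such a list can be built; the instance count, the port-offset construction and the variable names are illustrative assumptions, not the actual ceph-ansible task:

- name: Illustrative rgw_instances construction (sketch)
  hosts: localhost
  gather_facts: false
  vars:
    _radosgw_address: 192.168.16.13          # per-node address, as seen for testbed-node-3 in the log
    radosgw_frontend_port: 8081
    radosgw_num_instances: 1
  tasks:
    - name: Build one {instance_name, radosgw_address, radosgw_frontend_port} entry per instance
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances | default([]) +
             [{'instance_name': 'rgw' ~ item,
               'radosgw_address': _radosgw_address,
               'radosgw_frontend_port': radosgw_frontend_port + item}] }}
      loop: "{{ range(0, radosgw_num_instances) | list }}"

    - name: Show the resulting list
      ansible.builtin.debug:
        var: rgw_instances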
2025-04-14 00:59:56.141765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.141775 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.141785 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141795 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.141810 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141821 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.141831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.141845 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.141856 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141866 | orchestrator | 2025-04-14 00:59:56.141877 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-14 00:59:56.141887 | orchestrator | Monday 14 April 2025 00:47:42 +0000 (0:00:01.146) 0:01:29.149 ********** 2025-04-14 00:59:56.141897 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.141907 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.141917 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.141927 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.141937 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.141947 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.141958 | orchestrator | 2025-04-14 00:59:56.141968 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-14 00:59:56.141978 | orchestrator | Monday 14 April 2025 00:47:43 +0000 (0:00:00.877) 0:01:30.027 ********** 2025-04-14 00:59:56.141988 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.141999 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.142009 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.142572 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-14 00:59:56.142602 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 00:59:56.142613 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 00:59:56.142623 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-14 00:59:56.142633 | orchestrator | 2025-04-14 00:59:56.142643 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-14 00:59:56.142653 | orchestrator | Monday 14 April 2025 00:47:44 +0000 (0:00:01.179) 0:01:31.207 ********** 2025-04-14 00:59:56.142664 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.142674 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.142684 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.142694 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-14 00:59:56.142704 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 00:59:56.142714 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 00:59:56.142724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-14 00:59:56.142734 | orchestrator | 2025-04-14 00:59:56.142744 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.142754 | orchestrator | Monday 14 April 2025 00:47:47 +0000 (0:00:02.787) 0:01:33.994 ********** 2025-04-14 00:59:56.142764 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.142776 | orchestrator | 2025-04-14 00:59:56.142786 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.142796 | orchestrator | Monday 14 April 2025 00:47:48 +0000 (0:00:01.463) 0:01:35.458 ********** 2025-04-14 00:59:56.142806 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.142816 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.142827 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.142837 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.142857 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.142868 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.142878 | orchestrator | 2025-04-14 00:59:56.142888 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.142898 | orchestrator | Monday 14 April 2025 00:47:49 +0000 (0:00:01.197) 0:01:36.656 ********** 2025-04-14 00:59:56.142908 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.142918 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.142928 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.142938 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.142948 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.142959 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.142969 | orchestrator | 2025-04-14 00:59:56.142979 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.142989 | orchestrator | Monday 14 April 2025 00:47:51 +0000 (0:00:01.991) 0:01:38.647 ********** 2025-04-14 00:59:56.142999 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143009 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143018 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143028 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.143087 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.143098 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.143108 | orchestrator | 2025-04-14 00:59:56.143118 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.143129 | orchestrator | Monday 14 April 2025 00:47:53 +0000 (0:00:01.230) 0:01:39.878 ********** 2025-04-14 00:59:56.143139 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143149 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143161 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143172 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.143183 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.143194 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.143205 | orchestrator | 2025-04-14 00:59:56.143217 | 
orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.143228 | orchestrator | Monday 14 April 2025 00:47:54 +0000 (0:00:01.574) 0:01:41.452 ********** 2025-04-14 00:59:56.143239 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143250 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.143370 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143403 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.143422 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.143439 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143455 | orchestrator | 2025-04-14 00:59:56.143468 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.143478 | orchestrator | Monday 14 April 2025 00:47:55 +0000 (0:00:00.935) 0:01:42.388 ********** 2025-04-14 00:59:56.143488 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143497 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143507 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143515 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143524 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143532 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143541 | orchestrator | 2025-04-14 00:59:56.143549 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.143558 | orchestrator | Monday 14 April 2025 00:47:57 +0000 (0:00:01.605) 0:01:43.993 ********** 2025-04-14 00:59:56.143566 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143575 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143584 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143592 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143601 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143609 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143618 | orchestrator | 2025-04-14 00:59:56.143627 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.143643 | orchestrator | Monday 14 April 2025 00:47:57 +0000 (0:00:00.797) 0:01:44.791 ********** 2025-04-14 00:59:56.143652 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143661 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143669 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143678 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143686 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143694 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143703 | orchestrator | 2025-04-14 00:59:56.143712 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.143720 | orchestrator | Monday 14 April 2025 00:47:58 +0000 (0:00:00.926) 0:01:45.717 ********** 2025-04-14 00:59:56.143728 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143737 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143745 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143754 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143762 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143771 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143779 | orchestrator | 2025-04-14 00:59:56.143788 | orchestrator 
| TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.143797 | orchestrator | Monday 14 April 2025 00:47:59 +0000 (0:00:00.642) 0:01:46.359 ********** 2025-04-14 00:59:56.143805 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143814 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143822 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143830 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143839 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.143848 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.143856 | orchestrator | 2025-04-14 00:59:56.143865 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.143873 | orchestrator | Monday 14 April 2025 00:48:00 +0000 (0:00:00.955) 0:01:47.315 ********** 2025-04-14 00:59:56.143882 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.143890 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.143899 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.143907 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.143916 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.143925 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.143933 | orchestrator | 2025-04-14 00:59:56.143942 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.143950 | orchestrator | Monday 14 April 2025 00:48:01 +0000 (0:00:01.047) 0:01:48.363 ********** 2025-04-14 00:59:56.143959 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.143968 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.143976 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.143985 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.143993 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144002 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144010 | orchestrator | 2025-04-14 00:59:56.144019 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-14 00:59:56.144027 | orchestrator | Monday 14 April 2025 00:48:02 +0000 (0:00:00.851) 0:01:49.214 ********** 2025-04-14 00:59:56.144053 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.144062 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.144070 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.144101 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144110 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144118 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144127 | orchestrator | 2025-04-14 00:59:56.144136 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.144144 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.654) 0:01:49.869 ********** 2025-04-14 00:59:56.144153 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144173 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144182 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144191 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.144199 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.144207 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.144216 | orchestrator | 2025-04-14 00:59:56.144224 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] 
****************************** 2025-04-14 00:59:56.144233 | orchestrator | Monday 14 April 2025 00:48:03 +0000 (0:00:00.900) 0:01:50.769 ********** 2025-04-14 00:59:56.144241 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144249 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144258 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144266 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.144275 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.144283 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.144291 | orchestrator | 2025-04-14 00:59:56.144300 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.144374 | orchestrator | Monday 14 April 2025 00:48:04 +0000 (0:00:00.860) 0:01:51.630 ********** 2025-04-14 00:59:56.144388 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144398 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144407 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144416 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.144426 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.144435 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.144445 | orchestrator | 2025-04-14 00:59:56.144454 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.144466 | orchestrator | Monday 14 April 2025 00:48:05 +0000 (0:00:01.144) 0:01:52.775 ********** 2025-04-14 00:59:56.144481 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144495 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144509 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144523 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144536 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144550 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144564 | orchestrator | 2025-04-14 00:59:56.144578 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.144593 | orchestrator | Monday 14 April 2025 00:48:06 +0000 (0:00:00.734) 0:01:53.509 ********** 2025-04-14 00:59:56.144607 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144620 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144629 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144638 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144646 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144654 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144663 | orchestrator | 2025-04-14 00:59:56.144671 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.144680 | orchestrator | Monday 14 April 2025 00:48:07 +0000 (0:00:00.869) 0:01:54.379 ********** 2025-04-14 00:59:56.144688 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.144697 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.144705 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.144714 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144722 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144731 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144739 | orchestrator | 2025-04-14 00:59:56.144748 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 
00:59:56.144757 | orchestrator | Monday 14 April 2025 00:48:08 +0000 (0:00:00.729) 0:01:55.109 ********** 2025-04-14 00:59:56.144765 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.144773 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.144782 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.144790 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.144799 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.144807 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.144822 | orchestrator | 2025-04-14 00:59:56.144831 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.144844 | orchestrator | Monday 14 April 2025 00:48:09 +0000 (0:00:00.908) 0:01:56.017 ********** 2025-04-14 00:59:56.144853 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144861 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144870 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144878 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144887 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144895 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144903 | orchestrator | 2025-04-14 00:59:56.144912 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.144920 | orchestrator | Monday 14 April 2025 00:48:09 +0000 (0:00:00.653) 0:01:56.671 ********** 2025-04-14 00:59:56.144929 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.144937 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.144946 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.144958 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.144967 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.144975 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.144984 | orchestrator | 2025-04-14 00:59:56.144992 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.145001 | orchestrator | Monday 14 April 2025 00:48:10 +0000 (0:00:00.959) 0:01:57.630 ********** 2025-04-14 00:59:56.145011 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145020 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145030 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145059 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145069 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145078 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145088 | orchestrator | 2025-04-14 00:59:56.145097 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.145107 | orchestrator | Monday 14 April 2025 00:48:11 +0000 (0:00:00.795) 0:01:58.425 ********** 2025-04-14 00:59:56.145116 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145126 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145135 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145145 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145154 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145163 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145173 | orchestrator | 2025-04-14 00:59:56.145182 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.145192 | orchestrator 
| Monday 14 April 2025 00:48:12 +0000 (0:00:00.957) 0:01:59.383 ********** 2025-04-14 00:59:56.145201 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145210 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145220 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145229 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145238 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145248 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145257 | orchestrator | 2025-04-14 00:59:56.145266 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.145276 | orchestrator | Monday 14 April 2025 00:48:13 +0000 (0:00:00.967) 0:02:00.350 ********** 2025-04-14 00:59:56.145285 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145294 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145303 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145313 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145322 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145332 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145341 | orchestrator | 2025-04-14 00:59:56.145414 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.145437 | orchestrator | Monday 14 April 2025 00:48:14 +0000 (0:00:01.189) 0:02:01.540 ********** 2025-04-14 00:59:56.145446 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145454 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145463 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145471 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145480 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145488 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145497 | orchestrator | 2025-04-14 00:59:56.145505 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.145514 | orchestrator | Monday 14 April 2025 00:48:15 +0000 (0:00:01.107) 0:02:02.647 ********** 2025-04-14 00:59:56.145523 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145531 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145540 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145549 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145557 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145566 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145574 | orchestrator | 2025-04-14 00:59:56.145583 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.145591 | orchestrator | Monday 14 April 2025 00:48:17 +0000 (0:00:01.426) 0:02:04.074 ********** 2025-04-14 00:59:56.145600 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145611 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145625 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145639 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145652 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145666 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145678 | orchestrator | 2025-04-14 00:59:56.145692 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.145707 | orchestrator | Monday 14 April 2025 00:48:18 +0000 (0:00:00.986) 0:02:05.061 ********** 2025-04-14 00:59:56.145721 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145735 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145744 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145752 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145761 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145769 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145784 | orchestrator | 2025-04-14 00:59:56.145792 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.145801 | orchestrator | Monday 14 April 2025 00:48:19 +0000 (0:00:00.850) 0:02:05.911 ********** 2025-04-14 00:59:56.145809 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145817 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145826 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145834 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145843 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145851 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145859 | orchestrator | 2025-04-14 00:59:56.145868 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.145877 | orchestrator | Monday 14 April 2025 00:48:19 +0000 (0:00:00.661) 0:02:06.573 ********** 2025-04-14 00:59:56.145885 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.145894 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.145902 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.145910 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.145919 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.145945 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.145954 | orchestrator | 2025-04-14 00:59:56.145962 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.145971 | orchestrator | Monday 14 April 2025 00:48:20 +0000 (0:00:00.928) 0:02:07.502 ********** 2025-04-14 00:59:56.145986 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.145995 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.146003 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146012 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.146089 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.146100 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146109 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.146119 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.146129 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146138 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.146148 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.146157 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.146167 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.146176 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146186 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146195 | 
orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.146209 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.146219 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146228 | orchestrator | 2025-04-14 00:59:56.146238 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.146248 | orchestrator | Monday 14 April 2025 00:48:21 +0000 (0:00:00.758) 0:02:08.260 ********** 2025-04-14 00:59:56.146258 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-14 00:59:56.146267 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-14 00:59:56.146277 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146287 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-14 00:59:56.146296 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-14 00:59:56.146305 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146315 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-14 00:59:56.146325 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-14 00:59:56.146400 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146413 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-14 00:59:56.146422 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-14 00:59:56.146431 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146440 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-14 00:59:56.146448 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-14 00:59:56.146457 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146466 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-14 00:59:56.146474 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-14 00:59:56.146483 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146492 | orchestrator | 2025-04-14 00:59:56.146500 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.146509 | orchestrator | Monday 14 April 2025 00:48:22 +0000 (0:00:00.978) 0:02:09.239 ********** 2025-04-14 00:59:56.146518 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146526 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146535 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146543 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146552 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146560 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146569 | orchestrator | 2025-04-14 00:59:56.146578 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.146586 | orchestrator | Monday 14 April 2025 00:48:23 +0000 (0:00:00.633) 0:02:09.873 ********** 2025-04-14 00:59:56.146595 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146610 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146618 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146626 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146634 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146642 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
00:59:56.146650 | orchestrator | 2025-04-14 00:59:56.146658 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.146667 | orchestrator | Monday 14 April 2025 00:48:23 +0000 (0:00:00.879) 0:02:10.752 ********** 2025-04-14 00:59:56.146675 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146683 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146691 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146699 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146706 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146714 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146722 | orchestrator | 2025-04-14 00:59:56.146730 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.146738 | orchestrator | Monday 14 April 2025 00:48:24 +0000 (0:00:00.622) 0:02:11.375 ********** 2025-04-14 00:59:56.146746 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146754 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146762 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146770 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146778 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146786 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146794 | orchestrator | 2025-04-14 00:59:56.146802 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.146810 | orchestrator | Monday 14 April 2025 00:48:25 +0000 (0:00:00.870) 0:02:12.246 ********** 2025-04-14 00:59:56.146818 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146826 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146834 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146846 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146854 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146862 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146870 | orchestrator | 2025-04-14 00:59:56.146881 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.146889 | orchestrator | Monday 14 April 2025 00:48:26 +0000 (0:00:00.650) 0:02:12.897 ********** 2025-04-14 00:59:56.146897 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.146905 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.146915 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.146928 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.146942 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.146955 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.146967 | orchestrator | 2025-04-14 00:59:56.146980 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.146993 | orchestrator | Monday 14 April 2025 00:48:26 +0000 (0:00:00.905) 0:02:13.802 ********** 2025-04-14 00:59:56.147007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.147020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.147031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.147059 | orchestrator | skipping: [testbed-node-0] 2025-04-14 
00:59:56.147068 | orchestrator | 2025-04-14 00:59:56.147077 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.147086 | orchestrator | Monday 14 April 2025 00:48:27 +0000 (0:00:00.457) 0:02:14.260 ********** 2025-04-14 00:59:56.147094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.147103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.147112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.147127 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147136 | orchestrator | 2025-04-14 00:59:56.147144 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.147153 | orchestrator | Monday 14 April 2025 00:48:27 +0000 (0:00:00.450) 0:02:14.710 ********** 2025-04-14 00:59:56.147161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.147170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.147179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.147243 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147255 | orchestrator | 2025-04-14 00:59:56.147264 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.147273 | orchestrator | Monday 14 April 2025 00:48:28 +0000 (0:00:00.477) 0:02:15.187 ********** 2025-04-14 00:59:56.147282 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147291 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147299 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147308 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147317 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147325 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.147334 | orchestrator | 2025-04-14 00:59:56.147343 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.147351 | orchestrator | Monday 14 April 2025 00:48:29 +0000 (0:00:00.846) 0:02:16.034 ********** 2025-04-14 00:59:56.147360 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.147369 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147378 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.147386 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147394 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.147402 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147410 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.147418 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147425 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.147433 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147441 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.147449 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.147457 | orchestrator | 2025-04-14 00:59:56.147465 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.147473 | orchestrator | Monday 14 April 2025 00:48:29 +0000 (0:00:00.813) 0:02:16.848 ********** 2025-04-14 00:59:56.147507 | orchestrator | skipping: 
[testbed-node-0] 2025-04-14 00:59:56.147515 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147523 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147531 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147539 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147547 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.147555 | orchestrator | 2025-04-14 00:59:56.147563 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.147571 | orchestrator | Monday 14 April 2025 00:48:30 +0000 (0:00:00.828) 0:02:17.677 ********** 2025-04-14 00:59:56.147579 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147587 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147594 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147602 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147610 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147618 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.147626 | orchestrator | 2025-04-14 00:59:56.147634 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.147642 | orchestrator | Monday 14 April 2025 00:48:31 +0000 (0:00:00.671) 0:02:18.348 ********** 2025-04-14 00:59:56.147650 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.147663 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147671 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.147679 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.147687 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147695 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.147703 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147710 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147718 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.147726 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147734 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.147742 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.147750 | orchestrator | 2025-04-14 00:59:56.147758 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.147766 | orchestrator | Monday 14 April 2025 00:48:32 +0000 (0:00:01.101) 0:02:19.449 ********** 2025-04-14 00:59:56.147774 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147781 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.147806 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.147815 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.147824 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.147836 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.147845 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.147854 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.147863 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
00:59:56.147871 | orchestrator | 2025-04-14 00:59:56.147880 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.147889 | orchestrator | Monday 14 April 2025 00:48:33 +0000 (0:00:00.735) 0:02:20.184 ********** 2025-04-14 00:59:56.147898 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.147906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.147915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.147924 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:59:56.147932 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:59:56.147941 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:59:56.147950 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.147961 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:59:56.148018 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:59:56.148030 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:59:56.148056 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.148072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.148080 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.148096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.148105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.148112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.148121 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148138 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.148173 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.148185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.148199 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148212 | orchestrator | 2025-04-14 00:59:56.148225 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.148237 | orchestrator | Monday 14 April 2025 00:48:35 +0000 (0:00:01.874) 0:02:22.059 ********** 2025-04-14 00:59:56.148250 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148262 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148275 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148287 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148301 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148314 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148328 | orchestrator | 2025-04-14 00:59:56.148341 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.148353 | orchestrator | Monday 14 April 2025 00:48:36 +0000 (0:00:01.594) 0:02:23.654 ********** 2025-04-14 00:59:56.148364 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148373 | 
orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148380 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148388 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.148396 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148404 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.148412 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148420 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.148428 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148436 | orchestrator | 2025-04-14 00:59:56.148444 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.148452 | orchestrator | Monday 14 April 2025 00:48:38 +0000 (0:00:01.443) 0:02:25.098 ********** 2025-04-14 00:59:56.148460 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148467 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148475 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148483 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148491 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148499 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148507 | orchestrator | 2025-04-14 00:59:56.148515 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.148523 | orchestrator | Monday 14 April 2025 00:48:39 +0000 (0:00:01.727) 0:02:26.826 ********** 2025-04-14 00:59:56.148531 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148539 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148546 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148554 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148562 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148570 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148578 | orchestrator | 2025-04-14 00:59:56.148585 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-04-14 00:59:56.148593 | orchestrator | Monday 14 April 2025 00:48:41 +0000 (0:00:01.452) 0:02:28.278 ********** 2025-04-14 00:59:56.148601 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.148609 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.148617 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.148624 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.148632 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.148640 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.148648 | orchestrator | 2025-04-14 00:59:56.148665 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-04-14 00:59:56.148674 | orchestrator | Monday 14 April 2025 00:48:43 +0000 (0:00:01.718) 0:02:29.997 ********** 2025-04-14 00:59:56.148683 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.148692 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.148707 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.148716 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.148724 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.148733 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.148741 | orchestrator | 2025-04-14 00:59:56.148750 | orchestrator | TASK [ceph-container-common : include 
prerequisites.yml] *********************** 2025-04-14 00:59:56.148759 | orchestrator | Monday 14 April 2025 00:48:45 +0000 (0:00:01.941) 0:02:31.938 ********** 2025-04-14 00:59:56.148769 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.148778 | orchestrator | 2025-04-14 00:59:56.148787 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-04-14 00:59:56.148796 | orchestrator | Monday 14 April 2025 00:48:46 +0000 (0:00:01.297) 0:02:33.236 ********** 2025-04-14 00:59:56.148805 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148813 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148823 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148831 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148840 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148848 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148857 | orchestrator | 2025-04-14 00:59:56.148921 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-04-14 00:59:56.148933 | orchestrator | Monday 14 April 2025 00:48:47 +0000 (0:00:00.898) 0:02:34.134 ********** 2025-04-14 00:59:56.148943 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.148951 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.148960 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.148969 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.148977 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.148986 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.148999 | orchestrator | 2025-04-14 00:59:56.149008 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-04-14 00:59:56.149016 | orchestrator | Monday 14 April 2025 00:48:47 +0000 (0:00:00.626) 0:02:34.761 ********** 2025-04-14 00:59:56.149024 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149032 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149055 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149064 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 00:59:56.149072 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 00:59:56.149079 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 00:59:56.149087 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149095 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149103 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-04-14 00:59:56.149111 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 00:59:56.149119 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 00:59:56.149128 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-04-14 
00:59:56.149135 | orchestrator | 2025-04-14 00:59:56.149143 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-04-14 00:59:56.149152 | orchestrator | Monday 14 April 2025 00:48:49 +0000 (0:00:01.752) 0:02:36.513 ********** 2025-04-14 00:59:56.149160 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.149167 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.149185 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.149193 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.149201 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.149209 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.149217 | orchestrator | 2025-04-14 00:59:56.149225 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-04-14 00:59:56.149233 | orchestrator | Monday 14 April 2025 00:48:50 +0000 (0:00:01.093) 0:02:37.606 ********** 2025-04-14 00:59:56.149241 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149249 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149257 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149265 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149273 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149280 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.149288 | orchestrator | 2025-04-14 00:59:56.149296 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-04-14 00:59:56.149304 | orchestrator | Monday 14 April 2025 00:48:51 +0000 (0:00:00.976) 0:02:38.582 ********** 2025-04-14 00:59:56.149313 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149321 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149329 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149337 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149345 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149352 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.149360 | orchestrator | 2025-04-14 00:59:56.149368 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-04-14 00:59:56.149376 | orchestrator | Monday 14 April 2025 00:48:52 +0000 (0:00:00.633) 0:02:39.216 ********** 2025-04-14 00:59:56.149384 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.149393 | orchestrator | 2025-04-14 00:59:56.149401 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-04-14 00:59:56.149409 | orchestrator | Monday 14 April 2025 00:48:53 +0000 (0:00:01.434) 0:02:40.650 ********** 2025-04-14 00:59:56.149417 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.149425 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.149433 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.149441 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.149449 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.149457 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.149465 | orchestrator | 2025-04-14 00:59:56.149476 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-04-14 00:59:56.149501 | orchestrator | Monday 14 April 2025 
00:49:24 +0000 (0:00:30.648) 0:03:11.299 ********** 2025-04-14 00:59:56.149509 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149517 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149525 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149533 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149541 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149549 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149605 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149616 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149625 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149633 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149641 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149649 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149663 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149671 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149679 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149687 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149695 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149703 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149711 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149719 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149726 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-04-14 00:59:56.149734 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-04-14 00:59:56.149742 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-04-14 00:59:56.149750 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.149758 | orchestrator | 2025-04-14 00:59:56.149766 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-04-14 00:59:56.149774 | orchestrator | Monday 14 April 2025 00:49:25 +0000 (0:00:00.986) 0:03:12.285 ********** 2025-04-14 00:59:56.149782 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149790 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149798 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149806 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149814 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149822 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.149829 | orchestrator | 2025-04-14 00:59:56.149837 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-04-14 00:59:56.149845 | orchestrator | Monday 14 April 2025 00:49:26 +0000 (0:00:00.709) 0:03:12.995 
********** 2025-04-14 00:59:56.149853 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149861 | orchestrator | 2025-04-14 00:59:56.149869 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-04-14 00:59:56.149877 | orchestrator | Monday 14 April 2025 00:49:26 +0000 (0:00:00.173) 0:03:13.169 ********** 2025-04-14 00:59:56.149885 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149893 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149901 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149909 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149917 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149925 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.149933 | orchestrator | 2025-04-14 00:59:56.149941 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-04-14 00:59:56.149949 | orchestrator | Monday 14 April 2025 00:49:27 +0000 (0:00:01.068) 0:03:14.237 ********** 2025-04-14 00:59:56.149957 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.149964 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.149972 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.149980 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.149988 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.149996 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150003 | orchestrator | 2025-04-14 00:59:56.150011 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-04-14 00:59:56.150077 | orchestrator | Monday 14 April 2025 00:49:28 +0000 (0:00:00.801) 0:03:15.038 ********** 2025-04-14 00:59:56.150085 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150093 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150106 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150114 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150121 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150132 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150140 | orchestrator | 2025-04-14 00:59:56.150146 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-04-14 00:59:56.150157 | orchestrator | Monday 14 April 2025 00:49:29 +0000 (0:00:00.999) 0:03:16.038 ********** 2025-04-14 00:59:56.150164 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.150171 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.150178 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.150184 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.150191 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.150198 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.150205 | orchestrator | 2025-04-14 00:59:56.150212 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-04-14 00:59:56.150219 | orchestrator | Monday 14 April 2025 00:49:32 +0000 (0:00:03.646) 0:03:19.684 ********** 2025-04-14 00:59:56.150226 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.150233 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.150239 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.150246 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.150253 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.150260 | orchestrator | ok: 
[testbed-node-5] 2025-04-14 00:59:56.150267 | orchestrator | 2025-04-14 00:59:56.150274 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-04-14 00:59:56.150281 | orchestrator | Monday 14 April 2025 00:49:33 +0000 (0:00:00.695) 0:03:20.379 ********** 2025-04-14 00:59:56.150288 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.150297 | orchestrator | 2025-04-14 00:59:56.150348 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-04-14 00:59:56.150358 | orchestrator | Monday 14 April 2025 00:49:34 +0000 (0:00:01.316) 0:03:21.695 ********** 2025-04-14 00:59:56.150365 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150372 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150379 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150386 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150393 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150400 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150407 | orchestrator | 2025-04-14 00:59:56.150414 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-04-14 00:59:56.150421 | orchestrator | Monday 14 April 2025 00:49:35 +0000 (0:00:01.056) 0:03:22.752 ********** 2025-04-14 00:59:56.150428 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150435 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150442 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150449 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150456 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150463 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150470 | orchestrator | 2025-04-14 00:59:56.150477 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-04-14 00:59:56.150484 | orchestrator | Monday 14 April 2025 00:49:36 +0000 (0:00:00.875) 0:03:23.627 ********** 2025-04-14 00:59:56.150490 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150497 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150504 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150511 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150518 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150525 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150532 | orchestrator | 2025-04-14 00:59:56.150539 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-04-14 00:59:56.150546 | orchestrator | Monday 14 April 2025 00:49:37 +0000 (0:00:01.040) 0:03:24.668 ********** 2025-04-14 00:59:56.150640 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150647 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150660 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150667 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150674 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150681 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150688 | orchestrator | 2025-04-14 00:59:56.150695 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-04-14 00:59:56.150702 
| orchestrator | Monday 14 April 2025 00:49:38 +0000 (0:00:00.879) 0:03:25.547 ********** 2025-04-14 00:59:56.150709 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150716 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150723 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150730 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150737 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150743 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150750 | orchestrator | 2025-04-14 00:59:56.150757 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-04-14 00:59:56.150764 | orchestrator | Monday 14 April 2025 00:49:39 +0000 (0:00:01.015) 0:03:26.563 ********** 2025-04-14 00:59:56.150771 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150778 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150800 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150807 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150814 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150825 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150832 | orchestrator | 2025-04-14 00:59:56.150839 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-04-14 00:59:56.150846 | orchestrator | Monday 14 April 2025 00:49:40 +0000 (0:00:01.019) 0:03:27.582 ********** 2025-04-14 00:59:56.150853 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.150860 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.150867 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.150874 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.150881 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.150888 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.150895 | orchestrator | 2025-04-14 00:59:56.150901 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-04-14 00:59:56.150908 | orchestrator | Monday 14 April 2025 00:49:41 +0000 (0:00:01.042) 0:03:28.625 ********** 2025-04-14 00:59:56.150915 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.150922 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.150929 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.150936 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.150943 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.150950 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.150957 | orchestrator | 2025-04-14 00:59:56.150964 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.150971 | orchestrator | Monday 14 April 2025 00:49:43 +0000 (0:00:01.536) 0:03:30.162 ********** 2025-04-14 00:59:56.150978 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.150985 | orchestrator | 2025-04-14 00:59:56.150992 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-04-14 00:59:56.150999 | orchestrator | Monday 14 April 2025 00:49:44 +0000 (0:00:01.396) 0:03:31.558 ********** 2025-04-14 00:59:56.151006 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-04-14 00:59:56.151013 | orchestrator | changed: 
[testbed-node-1] => (item=/etc/ceph) 2025-04-14 00:59:56.151020 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-04-14 00:59:56.151026 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-04-14 00:59:56.151044 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151052 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-04-14 00:59:56.151059 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151070 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-04-14 00:59:56.151127 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151138 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151152 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151166 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-04-14 00:59:56.151173 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151180 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151187 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151194 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151208 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-04-14 00:59:56.151215 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151229 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151235 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151242 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151249 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-04-14 00:59:56.151256 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151267 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151288 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151295 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-04-14 00:59:56.151302 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151309 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151316 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151323 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-04-14 00:59:56.151347 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151354 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151361 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151368 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151375 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151382 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-04-14 00:59:56.151389 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151396 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151402 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151409 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151416 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-04-14 00:59:56.151436 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151443 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151449 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151456 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151470 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-04-14 00:59:56.151477 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151491 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151498 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151504 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151511 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-04-14 00:59:56.151518 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151532 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151539 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151590 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-04-14 00:59:56.151599 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151606 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151613 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151620 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151634 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-04-14 00:59:56.151641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151648 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151655 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-04-14 00:59:56.151662 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-04-14 00:59:56.151669 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151676 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-04-14 00:59:56.151683 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-04-14 00:59:56.151690 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-04-14 00:59:56.151697 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-04-14 00:59:56.151703 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-04-14 00:59:56.151710 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-04-14 00:59:56.151717 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-04-14 00:59:56.151724 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-04-14 00:59:56.151731 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-04-14 00:59:56.151738 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-04-14 00:59:56.151745 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-04-14 00:59:56.151760 | orchestrator | 2025-04-14 00:59:56.151767 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.151777 | orchestrator | Monday 14 April 2025 00:49:50 +0000 (0:00:05.666) 0:03:37.225 ********** 2025-04-14 00:59:56.151785 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.151792 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.151799 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.151806 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.151814 | orchestrator | 2025-04-14 00:59:56.151821 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-04-14 00:59:56.151827 | orchestrator | Monday 14 April 2025 00:49:51 +0000 (0:00:01.516) 0:03:38.741 ********** 2025-04-14 00:59:56.151834 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151841 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151848 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151855 | orchestrator | 2025-04-14 00:59:56.151862 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 2025-04-14 00:59:56.151869 
| orchestrator | Monday 14 April 2025 00:49:53 +0000 (0:00:01.307) 0:03:40.048 ********** 2025-04-14 00:59:56.151876 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151883 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151890 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.151897 | orchestrator | 2025-04-14 00:59:56.151904 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.151911 | orchestrator | Monday 14 April 2025 00:49:54 +0000 (0:00:01.223) 0:03:41.272 ********** 2025-04-14 00:59:56.151918 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.151925 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.151932 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.151939 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.151947 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.151954 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.151961 | orchestrator | 2025-04-14 00:59:56.151967 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.151974 | orchestrator | Monday 14 April 2025 00:49:55 +0000 (0:00:01.072) 0:03:42.345 ********** 2025-04-14 00:59:56.151981 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.151988 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.151995 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152002 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.152009 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.152016 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.152023 | orchestrator | 2025-04-14 00:59:56.152030 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.152049 | orchestrator | Monday 14 April 2025 00:49:56 +0000 (0:00:00.755) 0:03:43.101 ********** 2025-04-14 00:59:56.152057 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152116 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152128 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152136 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152144 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152152 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152165 | orchestrator | 2025-04-14 00:59:56.152173 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.152181 | orchestrator | Monday 14 April 2025 00:49:57 +0000 (0:00:00.937) 0:03:44.038 ********** 2025-04-14 00:59:56.152189 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152197 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152204 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152212 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152220 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152228 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152236 | orchestrator | 2025-04-14 00:59:56.152243 | orchestrator | TASK [ceph-config : set_fact _devices] 
***************************************** 2025-04-14 00:59:56.152251 | orchestrator | Monday 14 April 2025 00:49:57 +0000 (0:00:00.756) 0:03:44.795 ********** 2025-04-14 00:59:56.152259 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152267 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152274 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152282 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152290 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152298 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152305 | orchestrator | 2025-04-14 00:59:56.152313 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.152321 | orchestrator | Monday 14 April 2025 00:49:58 +0000 (0:00:00.967) 0:03:45.763 ********** 2025-04-14 00:59:56.152329 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152337 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152344 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152352 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152360 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152368 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152380 | orchestrator | 2025-04-14 00:59:56.152387 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.152395 | orchestrator | Monday 14 April 2025 00:49:59 +0000 (0:00:00.673) 0:03:46.437 ********** 2025-04-14 00:59:56.152403 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152411 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152418 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152426 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152433 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152441 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152448 | orchestrator | 2025-04-14 00:59:56.152456 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.152463 | orchestrator | Monday 14 April 2025 00:50:00 +0000 (0:00:00.927) 0:03:47.364 ********** 2025-04-14 00:59:56.152470 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152478 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152485 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152493 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152500 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152508 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152515 | orchestrator | 2025-04-14 00:59:56.152523 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.152530 | orchestrator | Monday 14 April 2025 00:50:01 +0000 (0:00:00.693) 0:03:48.058 ********** 2025-04-14 00:59:56.152538 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152545 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152553 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152560 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.152567 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.152575 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.152582 | orchestrator | 
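
The num_osds bookkeeping above comes from parsing ceph-volume JSON on each OSD node: "ceph-volume lvm batch --report" predicts how many OSDs would be created for the configured devices, while "ceph-volume lvm list" reports OSDs that already exist. Below is a minimal sketch of that second step, assuming a plain (non-containerized) invocation; the task names and the num_osds variable mirror the log above, but this is not the verbatim ceph-ansible task.

# Sketch only: count OSDs that ceph-volume already knows about on this node.
# Assumes ceph-volume is run directly on the host; containerized deployments
# like this one wrap the call in the configured container runtime.
- name: run 'ceph-volume lvm list' to see how many osds have already been created
  ansible.builtin.command: ceph-volume lvm list --format json
  register: ceph_volume_lvm_list
  changed_when: false

- name: set_fact num_osds (add existing osds)
  ansible.builtin.set_fact:
    # The JSON report is a mapping keyed by OSD id, so its length is the OSD count.
    num_osds: "{{ (ceph_volume_lvm_list.stdout | from_json) | length }}"
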
2025-04-14 00:59:56.152590 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.152601 | orchestrator | Monday 14 April 2025 00:50:03 +0000 (0:00:02.339) 0:03:50.397 ********** 2025-04-14 00:59:56.152609 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152616 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152624 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152631 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.152638 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.152646 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.152653 | orchestrator | 2025-04-14 00:59:56.152661 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.152668 | orchestrator | Monday 14 April 2025 00:50:04 +0000 (0:00:00.701) 0:03:51.099 ********** 2025-04-14 00:59:56.152676 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.152683 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.152690 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152698 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.152709 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.152716 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152724 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.152731 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.152739 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152746 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.152755 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.152763 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.152771 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.152779 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.152787 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.152795 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.152804 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.152812 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.152820 | orchestrator | 2025-04-14 00:59:56.152828 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.152876 | orchestrator | Monday 14 April 2025 00:50:05 +0000 (0:00:01.089) 0:03:52.189 ********** 2025-04-14 00:59:56.152886 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-14 00:59:56.152898 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-14 00:59:56.152906 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.152915 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-14 00:59:56.152923 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-14 00:59:56.152931 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.152940 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-14 00:59:56.152949 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-14 00:59:56.152957 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.152966 | orchestrator | ok: [testbed-node-3] => (item=osd memory 
target) 2025-04-14 00:59:56.152974 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-04-14 00:59:56.152983 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-04-14 00:59:56.152991 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-04-14 00:59:56.152999 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-04-14 00:59:56.153007 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-04-14 00:59:56.153015 | orchestrator | 2025-04-14 00:59:56.153024 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.153032 | orchestrator | Monday 14 April 2025 00:50:06 +0000 (0:00:00.751) 0:03:52.940 ********** 2025-04-14 00:59:56.153050 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153058 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153066 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153078 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.153086 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.153094 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.153102 | orchestrator | 2025-04-14 00:59:56.153110 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.153117 | orchestrator | Monday 14 April 2025 00:50:07 +0000 (0:00:01.388) 0:03:54.329 ********** 2025-04-14 00:59:56.153124 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153131 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153137 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153144 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153151 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153158 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153165 | orchestrator | 2025-04-14 00:59:56.153172 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.153179 | orchestrator | Monday 14 April 2025 00:50:08 +0000 (0:00:00.948) 0:03:55.278 ********** 2025-04-14 00:59:56.153186 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153192 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153199 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153206 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153213 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153223 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153230 | orchestrator | 2025-04-14 00:59:56.153237 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.153244 | orchestrator | Monday 14 April 2025 00:50:09 +0000 (0:00:01.119) 0:03:56.397 ********** 2025-04-14 00:59:56.153251 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153258 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153265 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153272 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153278 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153285 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153292 | orchestrator | 2025-04-14 00:59:56.153302 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.153309 
| orchestrator | Monday 14 April 2025 00:50:10 +0000 (0:00:00.743) 0:03:57.140 ********** 2025-04-14 00:59:56.153316 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153323 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153338 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153346 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153353 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153359 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153366 | orchestrator | 2025-04-14 00:59:56.153373 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.153380 | orchestrator | Monday 14 April 2025 00:50:11 +0000 (0:00:01.059) 0:03:58.200 ********** 2025-04-14 00:59:56.153387 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153394 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153401 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153408 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.153415 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.153421 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.153428 | orchestrator | 2025-04-14 00:59:56.153435 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.153442 | orchestrator | Monday 14 April 2025 00:50:12 +0000 (0:00:00.748) 0:03:58.949 ********** 2025-04-14 00:59:56.153449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.153456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.153463 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.153470 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153481 | orchestrator | 2025-04-14 00:59:56.153488 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.153495 | orchestrator | Monday 14 April 2025 00:50:13 +0000 (0:00:00.942) 0:03:59.892 ********** 2025-04-14 00:59:56.153502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.153509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.153516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.153523 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153530 | orchestrator | 2025-04-14 00:59:56.153577 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.153588 | orchestrator | Monday 14 April 2025 00:50:13 +0000 (0:00:00.430) 0:04:00.322 ********** 2025-04-14 00:59:56.153595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.153602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.153609 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.153616 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153623 | orchestrator | 2025-04-14 00:59:56.153630 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.153637 | orchestrator | Monday 14 April 2025 00:50:13 +0000 (0:00:00.431) 0:04:00.754 ********** 2025-04-14 00:59:56.153643 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153651 | orchestrator | 
skipping: [testbed-node-1] 2025-04-14 00:59:56.153657 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153664 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.153671 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.153678 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.153685 | orchestrator | 2025-04-14 00:59:56.153692 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.153699 | orchestrator | Monday 14 April 2025 00:50:14 +0000 (0:00:00.750) 0:04:01.505 ********** 2025-04-14 00:59:56.153706 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.153713 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153720 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.153727 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.153734 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153741 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153748 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-14 00:59:56.153755 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-14 00:59:56.153762 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-14 00:59:56.153769 | orchestrator | 2025-04-14 00:59:56.153776 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.153783 | orchestrator | Monday 14 April 2025 00:50:16 +0000 (0:00:01.365) 0:04:02.870 ********** 2025-04-14 00:59:56.153790 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153797 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153804 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153811 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153817 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153824 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153831 | orchestrator | 2025-04-14 00:59:56.153838 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.153845 | orchestrator | Monday 14 April 2025 00:50:16 +0000 (0:00:00.713) 0:04:03.583 ********** 2025-04-14 00:59:56.153852 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153859 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153866 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153873 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153880 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153887 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.153894 | orchestrator | 2025-04-14 00:59:56.153905 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.153912 | orchestrator | Monday 14 April 2025 00:50:17 +0000 (0:00:00.980) 0:04:04.564 ********** 2025-04-14 00:59:56.153919 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.153926 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.153933 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.153940 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.153947 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.153954 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.153961 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 
00:59:56.153968 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.153975 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.153982 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.153992 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.153999 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.154006 | orchestrator | 2025-04-14 00:59:56.154013 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.154079 | orchestrator | Monday 14 April 2025 00:50:18 +0000 (0:00:01.146) 0:04:05.710 ********** 2025-04-14 00:59:56.154087 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.154094 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.154101 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.154108 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.154115 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.154122 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.154129 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.154136 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.154143 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.154150 | orchestrator | 2025-04-14 00:59:56.154157 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.154164 | orchestrator | Monday 14 April 2025 00:50:19 +0000 (0:00:00.950) 0:04:06.660 ********** 2025-04-14 00:59:56.154171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.154179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.154186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.154192 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.154200 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:59:56.154250 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:59:56.154261 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:59:56.154269 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.154277 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:59:56.154284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:59:56.154292 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:59:56.154300 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.154308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.154315 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.154323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.154330 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.154338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.154345 | orchestrator | skipping: [testbed-node-3] 2025-04-14 
00:59:56.154359 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.154366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.154374 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.154382 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.154390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.154397 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.154404 | orchestrator | 2025-04-14 00:59:56.154412 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.154420 | orchestrator | Monday 14 April 2025 00:50:21 +0000 (0:00:01.815) 0:04:08.476 ********** 2025-04-14 00:59:56.154428 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.154435 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.154443 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.154450 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.154458 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.154466 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.154473 | orchestrator | 2025-04-14 00:59:56.154481 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.154488 | orchestrator | Monday 14 April 2025 00:50:26 +0000 (0:00:05.308) 0:04:13.785 ********** 2025-04-14 00:59:56.154496 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.154503 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.154511 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.154518 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.154526 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.154534 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.154542 | orchestrator | 2025-04-14 00:59:56.154550 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-14 00:59:56.154558 | orchestrator | Monday 14 April 2025 00:50:27 +0000 (0:00:01.059) 0:04:14.844 ********** 2025-04-14 00:59:56.154565 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.154573 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.154580 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.154587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.154594 | orchestrator | 2025-04-14 00:59:56.154601 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-14 00:59:56.154607 | orchestrator | Monday 14 April 2025 00:50:29 +0000 (0:00:01.184) 0:04:16.029 ********** 2025-04-14 00:59:56.154613 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.154620 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.154626 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.154632 | orchestrator | 2025-04-14 00:59:56.154642 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-04-14 00:59:56.154649 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.154655 | orchestrator | 2025-04-14 00:59:56.154661 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] 
*********************** 2025-04-14 00:59:56.154667 | orchestrator | Monday 14 April 2025 00:50:30 +0000 (0:00:01.277) 0:04:17.306 ********** 2025-04-14 00:59:56.154673 | orchestrator | 2025-04-14 00:59:56.154680 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-04-14 00:59:56.154686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.154692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.154698 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.154704 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.154711 | orchestrator | 2025-04-14 00:59:56.154717 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-14 00:59:56.154723 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.154729 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.154741 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.154747 | orchestrator | 2025-04-14 00:59:56.154754 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-14 00:59:56.154760 | orchestrator | Monday 14 April 2025 00:50:31 +0000 (0:00:01.349) 0:04:18.655 ********** 2025-04-14 00:59:56.154766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.154776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.154782 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.154788 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.154794 | orchestrator | 2025-04-14 00:59:56.154801 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-14 00:59:56.154816 | orchestrator | Monday 14 April 2025 00:50:32 +0000 (0:00:01.189) 0:04:19.845 ********** 2025-04-14 00:59:56.154823 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.154829 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.154835 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.154841 | orchestrator | 2025-04-14 00:59:56.154848 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-04-14 00:59:56.154891 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.154900 | orchestrator | 2025-04-14 00:59:56.154906 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-14 00:59:56.154912 | orchestrator | Monday 14 April 2025 00:50:33 +0000 (0:00:00.617) 0:04:20.462 ********** 2025-04-14 00:59:56.154918 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.154925 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.154931 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.154937 | orchestrator | 2025-04-14 00:59:56.154943 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-04-14 00:59:56.154949 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.154955 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.154962 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.154968 | orchestrator | 2025-04-14 00:59:56.154974 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-14 00:59:56.154980 | orchestrator | Monday 14 April 2025 00:50:34 +0000 
(0:00:00.674) 0:04:21.137 ********** 2025-04-14 00:59:56.154986 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.154992 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.154999 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.155005 | orchestrator | 2025-04-14 00:59:56.155011 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-04-14 00:59:56.155017 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155023 | orchestrator | 2025-04-14 00:59:56.155030 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-14 00:59:56.155048 | orchestrator | Monday 14 April 2025 00:50:35 +0000 (0:00:00.819) 0:04:21.957 ********** 2025-04-14 00:59:56.155055 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.155064 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.155071 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.155077 | orchestrator | 2025-04-14 00:59:56.155083 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-04-14 00:59:56.155089 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155095 | orchestrator | 2025-04-14 00:59:56.155101 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-14 00:59:56.155107 | orchestrator | Monday 14 April 2025 00:50:35 +0000 (0:00:00.828) 0:04:22.786 ********** 2025-04-14 00:59:56.155113 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155120 | orchestrator | 2025-04-14 00:59:56.155126 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-14 00:59:56.155132 | orchestrator | Monday 14 April 2025 00:50:36 +0000 (0:00:00.150) 0:04:22.937 ********** 2025-04-14 00:59:56.155138 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.155144 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.155155 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.155161 | orchestrator | 2025-04-14 00:59:56.155167 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-04-14 00:59:56.155173 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155179 | orchestrator | 2025-04-14 00:59:56.155186 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-14 00:59:56.155192 | orchestrator | Monday 14 April 2025 00:50:36 +0000 (0:00:00.813) 0:04:23.750 ********** 2025-04-14 00:59:56.155198 | orchestrator | 2025-04-14 00:59:56.155204 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-04-14 00:59:56.155210 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155217 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.155223 | orchestrator | 2025-04-14 00:59:56.155229 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-14 00:59:56.155235 | orchestrator | Monday 14 April 2025 00:50:38 +0000 (0:00:01.176) 0:04:24.926 ********** 2025-04-14 00:59:56.155241 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.155248 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.155254 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.155260 | orchestrator | 2025-04-14 
00:59:56.155266 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-04-14 00:59:56.155272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.155278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.155284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.155290 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155297 | orchestrator | 2025-04-14 00:59:56.155303 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-14 00:59:56.155312 | orchestrator | Monday 14 April 2025 00:50:39 +0000 (0:00:01.246) 0:04:26.172 ********** 2025-04-14 00:59:56.155318 | orchestrator | 2025-04-14 00:59:56.155324 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-04-14 00:59:56.155331 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155342 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.155352 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.155361 | orchestrator | 2025-04-14 00:59:56.155371 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-14 00:59:56.155380 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.155389 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.155399 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.155408 | orchestrator | 2025-04-14 00:59:56.155418 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-14 00:59:56.155428 | orchestrator | Monday 14 April 2025 00:50:40 +0000 (0:00:01.280) 0:04:27.453 ********** 2025-04-14 00:59:56.155438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.155448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.155458 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.155465 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.155476 | orchestrator | 2025-04-14 00:59:56.155486 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-14 00:59:56.155496 | orchestrator | Monday 14 April 2025 00:50:41 +0000 (0:00:00.981) 0:04:28.435 ********** 2025-04-14 00:59:56.155505 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.155516 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.155527 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.155537 | orchestrator | 2025-04-14 00:59:56.155606 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-04-14 00:59:56.155622 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.155634 | orchestrator | 2025-04-14 00:59:56.155643 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-14 00:59:56.155660 | orchestrator | Monday 14 April 2025 00:50:42 +0000 (0:00:01.030) 0:04:29.465 ********** 2025-04-14 00:59:56.155672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.155682 | orchestrator | 2025-04-14 00:59:56.155692 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-04-14 00:59:56.155702 | orchestrator | Monday 
14 April 2025 00:50:43 +0000 (0:00:00.632) 0:04:30.098 ********** 2025-04-14 00:59:56.155712 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.155722 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.155731 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.155741 | orchestrator | 2025-04-14 00:59:56.155752 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-04-14 00:59:56.155762 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.155772 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.155782 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.155792 | orchestrator | 2025-04-14 00:59:56.155803 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-14 00:59:56.155814 | orchestrator | Monday 14 April 2025 00:50:44 +0000 (0:00:01.122) 0:04:31.220 ********** 2025-04-14 00:59:56.155824 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.155834 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.155845 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.155853 | orchestrator | 2025-04-14 00:59:56.155863 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.155874 | orchestrator | Monday 14 April 2025 00:50:45 +0000 (0:00:01.433) 0:04:32.654 ********** 2025-04-14 00:59:56.155884 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.155894 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.155904 | orchestrator | 2025-04-14 00:59:56.155914 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-04-14 00:59:56.155924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.155934 | orchestrator | 2025-04-14 00:59:56.155944 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.155954 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.156016 | orchestrator | 2025-04-14 00:59:56.156027 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-04-14 00:59:56.156097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.156111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.156122 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.156133 | orchestrator | 2025-04-14 00:59:56.156145 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-14 00:59:56.156156 | orchestrator | Monday 14 April 2025 00:50:47 +0000 (0:00:01.334) 0:04:33.988 ********** 2025-04-14 00:59:56.156168 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.156179 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.156190 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.156201 | orchestrator | 2025-04-14 00:59:56.156212 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-14 00:59:56.156223 | orchestrator | Monday 14 April 2025 00:50:48 +0000 (0:00:01.034) 0:04:35.022 ********** 2025-04-14 00:59:56.156235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.156247 | orchestrator | 2025-04-14 00:59:56.156257 | orchestrator | RUNNING HANDLER [ceph-handler : set 
_rgw_handler_called before restart] ******** 2025-04-14 00:59:56.156267 | orchestrator | Monday 14 April 2025 00:50:48 +0000 (0:00:00.603) 0:04:35.626 ********** 2025-04-14 00:59:56.156277 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.156288 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.156298 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.156308 | orchestrator | 2025-04-14 00:59:56.156319 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-14 00:59:56.156337 | orchestrator | Monday 14 April 2025 00:50:49 +0000 (0:00:00.665) 0:04:36.291 ********** 2025-04-14 00:59:56.156347 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.156358 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.156368 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.156378 | orchestrator | 2025-04-14 00:59:56.156388 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-14 00:59:56.156398 | orchestrator | Monday 14 April 2025 00:50:50 +0000 (0:00:01.261) 0:04:37.553 ********** 2025-04-14 00:59:56.156409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.156420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.156430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.156441 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.156451 | orchestrator | 2025-04-14 00:59:56.156462 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-14 00:59:56.156472 | orchestrator | Monday 14 April 2025 00:50:51 +0000 (0:00:00.683) 0:04:38.237 ********** 2025-04-14 00:59:56.156482 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.156492 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.156503 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.156513 | orchestrator | 2025-04-14 00:59:56.156529 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-04-14 00:59:56.156540 | orchestrator | Monday 14 April 2025 00:50:51 +0000 (0:00:00.308) 0:04:38.546 ********** 2025-04-14 00:59:56.156549 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.156559 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.156572 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.156582 | orchestrator | 2025-04-14 00:59:56.156593 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-14 00:59:56.156603 | orchestrator | Monday 14 April 2025 00:50:52 +0000 (0:00:00.565) 0:04:39.111 ********** 2025-04-14 00:59:56.156614 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.156625 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.156712 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.156724 | orchestrator | 2025-04-14 00:59:56.156730 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-04-14 00:59:56.156736 | orchestrator | Monday 14 April 2025 00:50:52 +0000 (0:00:00.347) 0:04:39.458 ********** 2025-04-14 00:59:56.156742 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.156748 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.156754 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.156759 | orchestrator | 2025-04-14 00:59:56.156765 | 
orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.156771 | orchestrator | Monday 14 April 2025 00:50:52 +0000 (0:00:00.347) 0:04:39.806 ********** 2025-04-14 00:59:56.156777 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.156783 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.156789 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.156795 | orchestrator | 2025-04-14 00:59:56.156801 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-04-14 00:59:56.156807 | orchestrator | 2025-04-14 00:59:56.156813 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.156819 | orchestrator | Monday 14 April 2025 00:50:55 +0000 (0:00:02.445) 0:04:42.252 ********** 2025-04-14 00:59:56.156825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.156831 | orchestrator | 2025-04-14 00:59:56.156837 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.156843 | orchestrator | Monday 14 April 2025 00:50:55 +0000 (0:00:00.564) 0:04:42.817 ********** 2025-04-14 00:59:56.156848 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.156854 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.156866 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.156872 | orchestrator | 2025-04-14 00:59:56.156878 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.156884 | orchestrator | Monday 14 April 2025 00:50:56 +0000 (0:00:00.763) 0:04:43.581 ********** 2025-04-14 00:59:56.156890 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.156896 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.156902 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.156907 | orchestrator | 2025-04-14 00:59:56.156913 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.156919 | orchestrator | Monday 14 April 2025 00:50:57 +0000 (0:00:00.849) 0:04:44.431 ********** 2025-04-14 00:59:56.156925 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.156931 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.156937 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.156943 | orchestrator | 2025-04-14 00:59:56.156948 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.156954 | orchestrator | Monday 14 April 2025 00:50:57 +0000 (0:00:00.413) 0:04:44.845 ********** 2025-04-14 00:59:56.156960 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.156966 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.156972 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.156978 | orchestrator | 2025-04-14 00:59:56.156984 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.156990 | orchestrator | Monday 14 April 2025 00:50:58 +0000 (0:00:00.427) 0:04:45.272 ********** 2025-04-14 00:59:56.156995 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.157001 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.157007 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.157013 | orchestrator | 
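Note: the container checks above (ceph-handler probing for mon, osd, mds, rgw, mgr containers) can be reproduced by hand. The sketch below is a rough equivalent only; the container runtime (docker vs. podman) and the container naming scheme are assumptions, not taken from this log.

# hedged sketch: probe for a running mon container on the current node
runtime=docker                       # assumption; could equally be podman
name="ceph-mon-$(hostname -s)"       # hypothetical naming scheme
if "$runtime" ps --format '{{.Names}}' | grep -qx "$name"; then
  echo "mon container '$name' is running"
else
  echo "mon container '$name' not found"
fi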
2025-04-14 00:59:56.157019 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.157028 | orchestrator | Monday 14 April 2025 00:50:59 +0000 (0:00:00.767) 0:04:46.039 ********** 2025-04-14 00:59:56.157054 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157063 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157073 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157082 | orchestrator | 2025-04-14 00:59:56.157091 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.157102 | orchestrator | Monday 14 April 2025 00:50:59 +0000 (0:00:00.610) 0:04:46.650 ********** 2025-04-14 00:59:56.157109 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157115 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157120 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157126 | orchestrator | 2025-04-14 00:59:56.157132 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.157138 | orchestrator | Monday 14 April 2025 00:51:00 +0000 (0:00:00.401) 0:04:47.051 ********** 2025-04-14 00:59:56.157144 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157149 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157155 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157161 | orchestrator | 2025-04-14 00:59:56.157167 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.157173 | orchestrator | Monday 14 April 2025 00:51:00 +0000 (0:00:00.367) 0:04:47.419 ********** 2025-04-14 00:59:56.157178 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157184 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157190 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157196 | orchestrator | 2025-04-14 00:59:56.157202 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.157208 | orchestrator | Monday 14 April 2025 00:51:00 +0000 (0:00:00.364) 0:04:47.784 ********** 2025-04-14 00:59:56.157213 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157219 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157225 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157235 | orchestrator | 2025-04-14 00:59:56.157241 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.157251 | orchestrator | Monday 14 April 2025 00:51:01 +0000 (0:00:00.734) 0:04:48.518 ********** 2025-04-14 00:59:56.157257 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.157263 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.157269 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.157274 | orchestrator | 2025-04-14 00:59:56.157280 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.157329 | orchestrator | Monday 14 April 2025 00:51:02 +0000 (0:00:00.816) 0:04:49.335 ********** 2025-04-14 00:59:56.157337 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157343 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157349 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157355 | orchestrator | 2025-04-14 00:59:56.157361 | orchestrator | TASK [ceph-handler : set_fact 
handler_mon_status] ****************************** 2025-04-14 00:59:56.157367 | orchestrator | Monday 14 April 2025 00:51:02 +0000 (0:00:00.382) 0:04:49.717 ********** 2025-04-14 00:59:56.157373 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.157378 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.157384 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.157390 | orchestrator | 2025-04-14 00:59:56.157396 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.157402 | orchestrator | Monday 14 April 2025 00:51:03 +0000 (0:00:00.514) 0:04:50.231 ********** 2025-04-14 00:59:56.157408 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157417 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157423 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157429 | orchestrator | 2025-04-14 00:59:56.157435 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.157441 | orchestrator | Monday 14 April 2025 00:51:04 +0000 (0:00:01.262) 0:04:51.494 ********** 2025-04-14 00:59:56.157447 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157453 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157459 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157464 | orchestrator | 2025-04-14 00:59:56.157470 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.157476 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:00.546) 0:04:52.041 ********** 2025-04-14 00:59:56.157482 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157488 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157494 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157500 | orchestrator | 2025-04-14 00:59:56.157506 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.157511 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:00.414) 0:04:52.456 ********** 2025-04-14 00:59:56.157517 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157523 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157529 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157535 | orchestrator | 2025-04-14 00:59:56.157541 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.157547 | orchestrator | Monday 14 April 2025 00:51:05 +0000 (0:00:00.352) 0:04:52.808 ********** 2025-04-14 00:59:56.157553 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157559 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157565 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157570 | orchestrator | 2025-04-14 00:59:56.157576 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.157582 | orchestrator | Monday 14 April 2025 00:51:06 +0000 (0:00:00.602) 0:04:53.411 ********** 2025-04-14 00:59:56.157588 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.157594 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.157600 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.157606 | orchestrator | 2025-04-14 00:59:56.157611 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.157621 | orchestrator | 
Monday 14 April 2025 00:51:06 +0000 (0:00:00.348) 0:04:53.759 ********** 2025-04-14 00:59:56.157627 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.157633 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.157639 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.157645 | orchestrator | 2025-04-14 00:59:56.157651 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.157657 | orchestrator | Monday 14 April 2025 00:51:07 +0000 (0:00:00.419) 0:04:54.178 ********** 2025-04-14 00:59:56.157663 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157669 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157675 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157681 | orchestrator | 2025-04-14 00:59:56.157687 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.157693 | orchestrator | Monday 14 April 2025 00:51:07 +0000 (0:00:00.339) 0:04:54.518 ********** 2025-04-14 00:59:56.157699 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157704 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157710 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157716 | orchestrator | 2025-04-14 00:59:56.157722 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.157728 | orchestrator | Monday 14 April 2025 00:51:08 +0000 (0:00:00.657) 0:04:55.175 ********** 2025-04-14 00:59:56.157734 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157740 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157746 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157751 | orchestrator | 2025-04-14 00:59:56.157757 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.157763 | orchestrator | Monday 14 April 2025 00:51:08 +0000 (0:00:00.456) 0:04:55.632 ********** 2025-04-14 00:59:56.157769 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157775 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157781 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157787 | orchestrator | 2025-04-14 00:59:56.157793 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.157799 | orchestrator | Monday 14 April 2025 00:51:09 +0000 (0:00:00.356) 0:04:55.988 ********** 2025-04-14 00:59:56.157804 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157810 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157816 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157822 | orchestrator | 2025-04-14 00:59:56.157828 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.157834 | orchestrator | Monday 14 April 2025 00:51:09 +0000 (0:00:00.666) 0:04:56.655 ********** 2025-04-14 00:59:56.157839 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157845 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157869 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157875 | orchestrator | 2025-04-14 00:59:56.157881 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.157927 | orchestrator | Monday 14 April 2025 00:51:10 +0000 (0:00:00.336) 0:04:56.992 ********** 
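Note: the ceph-config tasks around this point count OSDs by wrapping ceph-volume. In this run they are skipped (the play targets the mon/mgr group, so no OSDs are handled here), but the underlying commands look roughly like the sketch below; the device paths are hypothetical.

# hedged sketch: report how many OSDs a batch run would create
ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
# hedged sketch: list OSDs that already exist on the host
ceph-volume lvm list --format json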
2025-04-14 00:59:56.157936 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157942 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157948 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157954 | orchestrator | 2025-04-14 00:59:56.157960 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.157966 | orchestrator | Monday 14 April 2025 00:51:10 +0000 (0:00:00.464) 0:04:57.457 ********** 2025-04-14 00:59:56.157972 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.157978 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.157984 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.157990 | orchestrator | 2025-04-14 00:59:56.157996 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.158006 | orchestrator | Monday 14 April 2025 00:51:10 +0000 (0:00:00.386) 0:04:57.843 ********** 2025-04-14 00:59:56.158012 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158048 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158054 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158060 | orchestrator | 2025-04-14 00:59:56.158066 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.158072 | orchestrator | Monday 14 April 2025 00:51:11 +0000 (0:00:00.708) 0:04:58.551 ********** 2025-04-14 00:59:56.158078 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158084 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158090 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158096 | orchestrator | 2025-04-14 00:59:56.158102 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.158108 | orchestrator | Monday 14 April 2025 00:51:12 +0000 (0:00:00.373) 0:04:58.926 ********** 2025-04-14 00:59:56.158113 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158122 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158128 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158134 | orchestrator | 2025-04-14 00:59:56.158140 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.158146 | orchestrator | Monday 14 April 2025 00:51:12 +0000 (0:00:00.384) 0:04:59.310 ********** 2025-04-14 00:59:56.158152 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158158 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158164 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158170 | orchestrator | 2025-04-14 00:59:56.158175 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.158182 | orchestrator | Monday 14 April 2025 00:51:12 +0000 (0:00:00.380) 0:04:59.691 ********** 2025-04-14 00:59:56.158188 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.158193 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.158199 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158205 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.158211 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.158217 | orchestrator | skipping: 
[testbed-node-1] 2025-04-14 00:59:56.158223 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.158229 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.158235 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158241 | orchestrator | 2025-04-14 00:59:56.158247 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.158252 | orchestrator | Monday 14 April 2025 00:51:13 +0000 (0:00:00.665) 0:05:00.356 ********** 2025-04-14 00:59:56.158258 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-14 00:59:56.158264 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-14 00:59:56.158270 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158276 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-14 00:59:56.158282 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-14 00:59:56.158288 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158294 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-14 00:59:56.158300 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-14 00:59:56.158305 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158311 | orchestrator | 2025-04-14 00:59:56.158317 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.158323 | orchestrator | Monday 14 April 2025 00:51:13 +0000 (0:00:00.382) 0:05:00.739 ********** 2025-04-14 00:59:56.158329 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158335 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158345 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158351 | orchestrator | 2025-04-14 00:59:56.158357 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.158363 | orchestrator | Monday 14 April 2025 00:51:14 +0000 (0:00:00.363) 0:05:01.103 ********** 2025-04-14 00:59:56.158369 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158374 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158380 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158386 | orchestrator | 2025-04-14 00:59:56.158392 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.158398 | orchestrator | Monday 14 April 2025 00:51:14 +0000 (0:00:00.364) 0:05:01.467 ********** 2025-04-14 00:59:56.158404 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158410 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158416 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158422 | orchestrator | 2025-04-14 00:59:56.158428 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.158434 | orchestrator | Monday 14 April 2025 00:51:15 +0000 (0:00:00.646) 0:05:02.113 ********** 2025-04-14 00:59:56.158440 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158446 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158451 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158457 | orchestrator | 2025-04-14 00:59:56.158498 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-04-14 00:59:56.158506 | orchestrator | Monday 14 April 2025 00:51:15 +0000 (0:00:00.379) 0:05:02.493 ********** 2025-04-14 00:59:56.158513 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158519 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158525 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158530 | orchestrator | 2025-04-14 00:59:56.158536 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.158542 | orchestrator | Monday 14 April 2025 00:51:16 +0000 (0:00:00.383) 0:05:02.876 ********** 2025-04-14 00:59:56.158548 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158554 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158560 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158566 | orchestrator | 2025-04-14 00:59:56.158572 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.158578 | orchestrator | Monday 14 April 2025 00:51:16 +0000 (0:00:00.341) 0:05:03.217 ********** 2025-04-14 00:59:56.158583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.158589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.158595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.158601 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158607 | orchestrator | 2025-04-14 00:59:56.158613 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.158619 | orchestrator | Monday 14 April 2025 00:51:17 +0000 (0:00:00.771) 0:05:03.989 ********** 2025-04-14 00:59:56.158625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.158631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.158637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.158642 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158648 | orchestrator | 2025-04-14 00:59:56.158654 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.158660 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:01.028) 0:05:05.018 ********** 2025-04-14 00:59:56.158666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.158672 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.158678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.158688 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158694 | orchestrator | 2025-04-14 00:59:56.158700 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.158706 | orchestrator | Monday 14 April 2025 00:51:18 +0000 (0:00:00.470) 0:05:05.488 ********** 2025-04-14 00:59:56.158712 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158717 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158723 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158729 | orchestrator | 2025-04-14 00:59:56.158735 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.158744 | orchestrator | Monday 14 April 2025 00:51:19 +0000 
(0:00:00.404) 0:05:05.893 ********** 2025-04-14 00:59:56.158750 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.158756 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158762 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.158768 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158774 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.158780 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158786 | orchestrator | 2025-04-14 00:59:56.158792 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.158797 | orchestrator | Monday 14 April 2025 00:51:19 +0000 (0:00:00.485) 0:05:06.378 ********** 2025-04-14 00:59:56.158803 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158809 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158815 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158821 | orchestrator | 2025-04-14 00:59:56.158827 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.158833 | orchestrator | Monday 14 April 2025 00:51:19 +0000 (0:00:00.341) 0:05:06.719 ********** 2025-04-14 00:59:56.158838 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158844 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158850 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158856 | orchestrator | 2025-04-14 00:59:56.158862 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.158868 | orchestrator | Monday 14 April 2025 00:51:20 +0000 (0:00:00.693) 0:05:07.412 ********** 2025-04-14 00:59:56.158874 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.158880 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158885 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.158891 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158897 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.158903 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158909 | orchestrator | 2025-04-14 00:59:56.158915 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.158921 | orchestrator | Monday 14 April 2025 00:51:21 +0000 (0:00:00.707) 0:05:08.119 ********** 2025-04-14 00:59:56.158927 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.158932 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.158938 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.158944 | orchestrator | 2025-04-14 00:59:56.158950 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.158956 | orchestrator | Monday 14 April 2025 00:51:21 +0000 (0:00:00.395) 0:05:08.514 ********** 2025-04-14 00:59:56.158962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.158968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.158974 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.158979 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159013 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:59:56.159021 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:59:56.159030 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:59:56.159068 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159075 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:59:56.159084 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:59:56.159090 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:59:56.159096 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159102 | orchestrator | 2025-04-14 00:59:56.159108 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.159114 | orchestrator | Monday 14 April 2025 00:51:22 +0000 (0:00:01.072) 0:05:09.586 ********** 2025-04-14 00:59:56.159120 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159126 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159131 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159138 | orchestrator | 2025-04-14 00:59:56.159145 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.159151 | orchestrator | Monday 14 April 2025 00:51:23 +0000 (0:00:00.654) 0:05:10.241 ********** 2025-04-14 00:59:56.159157 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159163 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159170 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159176 | orchestrator | 2025-04-14 00:59:56.159183 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.159189 | orchestrator | Monday 14 April 2025 00:51:24 +0000 (0:00:00.915) 0:05:11.157 ********** 2025-04-14 00:59:56.159195 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159202 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159208 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159215 | orchestrator | 2025-04-14 00:59:56.159221 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.159228 | orchestrator | Monday 14 April 2025 00:51:24 +0000 (0:00:00.634) 0:05:11.791 ********** 2025-04-14 00:59:56.159234 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159241 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159247 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159254 | orchestrator | 2025-04-14 00:59:56.159260 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-04-14 00:59:56.159266 | orchestrator | Monday 14 April 2025 00:51:25 +0000 (0:00:00.949) 0:05:12.741 ********** 2025-04-14 00:59:56.159273 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159279 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.159286 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159292 | orchestrator | 2025-04-14 00:59:56.159299 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-04-14 00:59:56.159306 | orchestrator | Monday 14 April 2025 00:51:26 +0000 (0:00:00.384) 0:05:13.126 ********** 2025-04-14 00:59:56.159312 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.159319 | orchestrator | 2025-04-14 
00:59:56.159325 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-04-14 00:59:56.159332 | orchestrator | Monday 14 April 2025 00:51:27 +0000 (0:00:00.832) 0:05:13.958 ********** 2025-04-14 00:59:56.159338 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159344 | orchestrator | 2025-04-14 00:59:56.159350 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-04-14 00:59:56.159356 | orchestrator | Monday 14 April 2025 00:51:27 +0000 (0:00:00.172) 0:05:14.131 ********** 2025-04-14 00:59:56.159362 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-04-14 00:59:56.159368 | orchestrator | 2025-04-14 00:59:56.159373 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-04-14 00:59:56.159379 | orchestrator | Monday 14 April 2025 00:51:28 +0000 (0:00:00.817) 0:05:14.948 ********** 2025-04-14 00:59:56.159385 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159395 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.159401 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159407 | orchestrator | 2025-04-14 00:59:56.159412 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-04-14 00:59:56.159418 | orchestrator | Monday 14 April 2025 00:51:28 +0000 (0:00:00.397) 0:05:15.346 ********** 2025-04-14 00:59:56.159424 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159430 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.159436 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159442 | orchestrator | 2025-04-14 00:59:56.159447 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-04-14 00:59:56.159456 | orchestrator | Monday 14 April 2025 00:51:28 +0000 (0:00:00.413) 0:05:15.760 ********** 2025-04-14 00:59:56.159462 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.159468 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.159477 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.159483 | orchestrator | 2025-04-14 00:59:56.159488 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-04-14 00:59:56.159494 | orchestrator | Monday 14 April 2025 00:51:30 +0000 (0:00:01.608) 0:05:17.368 ********** 2025-04-14 00:59:56.159500 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.159506 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.159512 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.159518 | orchestrator | 2025-04-14 00:59:56.159524 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-04-14 00:59:56.159530 | orchestrator | Monday 14 April 2025 00:51:31 +0000 (0:00:00.896) 0:05:18.265 ********** 2025-04-14 00:59:56.159535 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.159541 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.159547 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.159553 | orchestrator | 2025-04-14 00:59:56.159559 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-04-14 00:59:56.159565 | orchestrator | Monday 14 April 2025 00:51:32 +0000 (0:00:00.753) 0:05:19.018 ********** 2025-04-14 00:59:56.159571 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159577 | orchestrator | ok: [testbed-node-1] 2025-04-14 
00:59:56.159583 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159588 | orchestrator | 2025-04-14 00:59:56.159610 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-04-14 00:59:56.159617 | orchestrator | Monday 14 April 2025 00:51:32 +0000 (0:00:00.837) 0:05:19.856 ********** 2025-04-14 00:59:56.159623 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159628 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159634 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159639 | orchestrator | 2025-04-14 00:59:56.159644 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-04-14 00:59:56.159650 | orchestrator | Monday 14 April 2025 00:51:33 +0000 (0:00:00.731) 0:05:20.588 ********** 2025-04-14 00:59:56.159655 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159661 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.159666 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159671 | orchestrator | 2025-04-14 00:59:56.159676 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-04-14 00:59:56.159682 | orchestrator | Monday 14 April 2025 00:51:34 +0000 (0:00:00.430) 0:05:21.019 ********** 2025-04-14 00:59:56.159687 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159693 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159698 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159703 | orchestrator | 2025-04-14 00:59:56.159708 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-04-14 00:59:56.159716 | orchestrator | Monday 14 April 2025 00:51:34 +0000 (0:00:00.343) 0:05:21.363 ********** 2025-04-14 00:59:56.159725 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.159734 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.159742 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.159755 | orchestrator | 2025-04-14 00:59:56.159763 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-04-14 00:59:56.159771 | orchestrator | Monday 14 April 2025 00:51:34 +0000 (0:00:00.434) 0:05:21.797 ********** 2025-04-14 00:59:56.159780 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.159788 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.159797 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.159803 | orchestrator | 2025-04-14 00:59:56.159808 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-04-14 00:59:56.159814 | orchestrator | Monday 14 April 2025 00:51:36 +0000 (0:00:01.754) 0:05:23.551 ********** 2025-04-14 00:59:56.159819 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159828 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159834 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159841 | orchestrator | 2025-04-14 00:59:56.159850 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-04-14 00:59:56.159858 | orchestrator | Monday 14 April 2025 00:51:37 +0000 (0:00:00.381) 0:05:23.933 ********** 2025-04-14 00:59:56.159867 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.159876 | orchestrator | 2025-04-14 00:59:56.159885 | orchestrator | TASK [ceph-mon : 
ensure systemd service override directory exists] ************* 2025-04-14 00:59:56.159894 | orchestrator | Monday 14 April 2025 00:51:37 +0000 (0:00:00.876) 0:05:24.809 ********** 2025-04-14 00:59:56.159903 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159908 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159914 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159919 | orchestrator | 2025-04-14 00:59:56.159924 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-04-14 00:59:56.159929 | orchestrator | Monday 14 April 2025 00:51:38 +0000 (0:00:00.448) 0:05:25.258 ********** 2025-04-14 00:59:56.159935 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.159940 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.159945 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.159951 | orchestrator | 2025-04-14 00:59:56.159956 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-04-14 00:59:56.159961 | orchestrator | Monday 14 April 2025 00:51:38 +0000 (0:00:00.395) 0:05:25.653 ********** 2025-04-14 00:59:56.159966 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.159972 | orchestrator | 2025-04-14 00:59:56.159977 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-04-14 00:59:56.159982 | orchestrator | Monday 14 April 2025 00:51:39 +0000 (0:00:00.888) 0:05:26.542 ********** 2025-04-14 00:59:56.159987 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.159993 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.159998 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160003 | orchestrator | 2025-04-14 00:59:56.160009 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-04-14 00:59:56.160014 | orchestrator | Monday 14 April 2025 00:51:40 +0000 (0:00:01.243) 0:05:27.786 ********** 2025-04-14 00:59:56.160019 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160024 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160030 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160045 | orchestrator | 2025-04-14 00:59:56.160051 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-04-14 00:59:56.160060 | orchestrator | Monday 14 April 2025 00:51:42 +0000 (0:00:01.269) 0:05:29.055 ********** 2025-04-14 00:59:56.160065 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160071 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160077 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160083 | orchestrator | 2025-04-14 00:59:56.160088 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-04-14 00:59:56.160094 | orchestrator | Monday 14 April 2025 00:51:43 +0000 (0:00:01.655) 0:05:30.710 ********** 2025-04-14 00:59:56.160131 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160137 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160143 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160149 | orchestrator | 2025-04-14 00:59:56.160155 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-04-14 00:59:56.160160 | orchestrator | Monday 14 April 2025 00:51:46 
+0000 (0:00:02.257) 0:05:32.968 ********** 2025-04-14 00:59:56.160166 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.160172 | orchestrator | 2025-04-14 00:59:56.160197 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-04-14 00:59:56.160204 | orchestrator | Monday 14 April 2025 00:51:46 +0000 (0:00:00.677) 0:05:33.645 ********** 2025-04-14 00:59:56.160210 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-04-14 00:59:56.160215 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160221 | orchestrator | 2025-04-14 00:59:56.160226 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-04-14 00:59:56.160232 | orchestrator | Monday 14 April 2025 00:52:08 +0000 (0:00:21.447) 0:05:55.093 ********** 2025-04-14 00:59:56.160237 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160242 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160248 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160253 | orchestrator | 2025-04-14 00:59:56.160258 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-04-14 00:59:56.160264 | orchestrator | Monday 14 April 2025 00:52:15 +0000 (0:00:07.403) 0:06:02.496 ********** 2025-04-14 00:59:56.160269 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160274 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160280 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160285 | orchestrator | 2025-04-14 00:59:56.160290 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.160296 | orchestrator | Monday 14 April 2025 00:52:16 +0000 (0:00:01.218) 0:06:03.714 ********** 2025-04-14 00:59:56.160301 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160306 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160312 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160317 | orchestrator | 2025-04-14 00:59:56.160323 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-04-14 00:59:56.160328 | orchestrator | Monday 14 April 2025 00:52:17 +0000 (0:00:00.710) 0:06:04.425 ********** 2025-04-14 00:59:56.160333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.160339 | orchestrator | 2025-04-14 00:59:56.160344 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-04-14 00:59:56.160349 | orchestrator | Monday 14 April 2025 00:52:18 +0000 (0:00:00.809) 0:06:05.235 ********** 2025-04-14 00:59:56.160355 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160360 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160365 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160371 | orchestrator | 2025-04-14 00:59:56.160376 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-04-14 00:59:56.160381 | orchestrator | Monday 14 April 2025 00:52:18 +0000 (0:00:00.367) 0:06:05.602 ********** 2025-04-14 00:59:56.160387 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160392 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160398 | 
orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160403 | orchestrator | 2025-04-14 00:59:56.160408 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-04-14 00:59:56.160413 | orchestrator | Monday 14 April 2025 00:52:20 +0000 (0:00:01.280) 0:06:06.883 ********** 2025-04-14 00:59:56.160419 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.160428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.160433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.160439 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160444 | orchestrator | 2025-04-14 00:59:56.160449 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-04-14 00:59:56.160455 | orchestrator | Monday 14 April 2025 00:52:21 +0000 (0:00:01.259) 0:06:08.142 ********** 2025-04-14 00:59:56.160460 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160465 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160471 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160476 | orchestrator | 2025-04-14 00:59:56.160481 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.160486 | orchestrator | Monday 14 April 2025 00:52:21 +0000 (0:00:00.356) 0:06:08.499 ********** 2025-04-14 00:59:56.160492 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.160497 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.160502 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.160508 | orchestrator | 2025-04-14 00:59:56.160513 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-04-14 00:59:56.160518 | orchestrator | 2025-04-14 00:59:56.160524 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.160529 | orchestrator | Monday 14 April 2025 00:52:23 +0000 (0:00:02.235) 0:06:10.735 ********** 2025-04-14 00:59:56.160535 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.160540 | orchestrator | 2025-04-14 00:59:56.160545 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.160551 | orchestrator | Monday 14 April 2025 00:52:24 +0000 (0:00:00.941) 0:06:11.676 ********** 2025-04-14 00:59:56.160556 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160561 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160566 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160572 | orchestrator | 2025-04-14 00:59:56.160577 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.160582 | orchestrator | Monday 14 April 2025 00:52:25 +0000 (0:00:00.851) 0:06:12.528 ********** 2025-04-14 00:59:56.160588 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160593 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160598 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160604 | orchestrator | 2025-04-14 00:59:56.160612 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.160617 | orchestrator | Monday 14 April 2025 00:52:26 +0000 (0:00:00.378) 0:06:12.906 ********** 
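Note: the mon deployment above (mkfs, systemd unit generation, start, and the retried "waiting for the monitor(s) to form the quorum" task) can be verified manually. The sketch below is a hedged equivalent; the ceph-mon@<hostname> unit name is an assumption about how the containerized mon service is named, and on containerized deployments the ceph CLI calls would normally be prefixed with the container exec command referenced by the container_exec_cmd fact.

# hedged sketch: check the freshly started monitor and its quorum
systemctl status "ceph-mon@$(hostname -s)"
systemctl is-enabled ceph-mon.target
ceph quorum_status --format json-pretty
ceph -s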
2025-04-14 00:59:56.160622 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160630 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160635 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160641 | orchestrator | 2025-04-14 00:59:56.160659 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.160665 | orchestrator | Monday 14 April 2025 00:52:26 +0000 (0:00:00.598) 0:06:13.504 ********** 2025-04-14 00:59:56.160671 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160676 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160681 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160686 | orchestrator | 2025-04-14 00:59:56.160692 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.160697 | orchestrator | Monday 14 April 2025 00:52:26 +0000 (0:00:00.339) 0:06:13.844 ********** 2025-04-14 00:59:56.160702 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160708 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160713 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160719 | orchestrator | 2025-04-14 00:59:56.160724 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.160729 | orchestrator | Monday 14 April 2025 00:52:27 +0000 (0:00:00.708) 0:06:14.553 ********** 2025-04-14 00:59:56.160738 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160743 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160749 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160754 | orchestrator | 2025-04-14 00:59:56.160759 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.160765 | orchestrator | Monday 14 April 2025 00:52:28 +0000 (0:00:00.341) 0:06:14.894 ********** 2025-04-14 00:59:56.160770 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160775 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160781 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160786 | orchestrator | 2025-04-14 00:59:56.160791 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.160796 | orchestrator | Monday 14 April 2025 00:52:28 +0000 (0:00:00.623) 0:06:15.518 ********** 2025-04-14 00:59:56.160802 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160807 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160812 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160818 | orchestrator | 2025-04-14 00:59:56.160823 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.160828 | orchestrator | Monday 14 April 2025 00:52:28 +0000 (0:00:00.339) 0:06:15.857 ********** 2025-04-14 00:59:56.160834 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160839 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160844 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160850 | orchestrator | 2025-04-14 00:59:56.160855 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.160860 | orchestrator | Monday 14 April 2025 00:52:29 +0000 (0:00:00.353) 0:06:16.210 ********** 2025-04-14 00:59:56.160866 | orchestrator | skipping: [testbed-node-0] 
2025-04-14 00:59:56.160871 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160876 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160882 | orchestrator | 2025-04-14 00:59:56.160887 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.160892 | orchestrator | Monday 14 April 2025 00:52:29 +0000 (0:00:00.352) 0:06:16.562 ********** 2025-04-14 00:59:56.160898 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160903 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160908 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160913 | orchestrator | 2025-04-14 00:59:56.160919 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.160924 | orchestrator | Monday 14 April 2025 00:52:30 +0000 (0:00:01.064) 0:06:17.627 ********** 2025-04-14 00:59:56.160929 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160935 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.160940 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.160945 | orchestrator | 2025-04-14 00:59:56.160951 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-14 00:59:56.160956 | orchestrator | Monday 14 April 2025 00:52:31 +0000 (0:00:00.342) 0:06:17.969 ********** 2025-04-14 00:59:56.160961 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.160967 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.160972 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.160977 | orchestrator | 2025-04-14 00:59:56.160983 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.160988 | orchestrator | Monday 14 April 2025 00:52:31 +0000 (0:00:00.366) 0:06:18.336 ********** 2025-04-14 00:59:56.160993 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.160999 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161004 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161009 | orchestrator | 2025-04-14 00:59:56.161014 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.161020 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:00.623) 0:06:18.960 ********** 2025-04-14 00:59:56.161028 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161115 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161123 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161128 | orchestrator | 2025-04-14 00:59:56.161134 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.161139 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:00.354) 0:06:19.314 ********** 2025-04-14 00:59:56.161144 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161150 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161155 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161160 | orchestrator | 2025-04-14 00:59:56.161166 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.161171 | orchestrator | Monday 14 April 2025 00:52:32 +0000 (0:00:00.346) 0:06:19.661 ********** 2025-04-14 00:59:56.161176 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161182 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161187 
| orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161192 | orchestrator | 2025-04-14 00:59:56.161197 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.161206 | orchestrator | Monday 14 April 2025 00:52:33 +0000 (0:00:00.345) 0:06:20.006 ********** 2025-04-14 00:59:56.161211 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161217 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161222 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161227 | orchestrator | 2025-04-14 00:59:56.161251 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.161257 | orchestrator | Monday 14 April 2025 00:52:33 +0000 (0:00:00.648) 0:06:20.654 ********** 2025-04-14 00:59:56.161262 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.161268 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.161273 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.161278 | orchestrator | 2025-04-14 00:59:56.161284 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.161289 | orchestrator | Monday 14 April 2025 00:52:34 +0000 (0:00:00.393) 0:06:21.048 ********** 2025-04-14 00:59:56.161295 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.161303 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.161308 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.161314 | orchestrator | 2025-04-14 00:59:56.161319 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.161325 | orchestrator | Monday 14 April 2025 00:52:34 +0000 (0:00:00.395) 0:06:21.443 ********** 2025-04-14 00:59:56.161330 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161335 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161340 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161346 | orchestrator | 2025-04-14 00:59:56.161351 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.161356 | orchestrator | Monday 14 April 2025 00:52:34 +0000 (0:00:00.380) 0:06:21.824 ********** 2025-04-14 00:59:56.161362 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161367 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161372 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161377 | orchestrator | 2025-04-14 00:59:56.161383 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.161388 | orchestrator | Monday 14 April 2025 00:52:35 +0000 (0:00:00.715) 0:06:22.540 ********** 2025-04-14 00:59:56.161393 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161398 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161404 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161409 | orchestrator | 2025-04-14 00:59:56.161415 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.161420 | orchestrator | Monday 14 April 2025 00:52:36 +0000 (0:00:00.381) 0:06:22.921 ********** 2025-04-14 00:59:56.161426 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161434 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161447 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161456 | orchestrator | 
2025-04-14 00:59:56.161464 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.161472 | orchestrator | Monday 14 April 2025 00:52:36 +0000 (0:00:00.428) 0:06:23.350 ********** 2025-04-14 00:59:56.161480 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161488 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161497 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161505 | orchestrator | 2025-04-14 00:59:56.161513 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.161522 | orchestrator | Monday 14 April 2025 00:52:36 +0000 (0:00:00.365) 0:06:23.715 ********** 2025-04-14 00:59:56.161530 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161539 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161548 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161557 | orchestrator | 2025-04-14 00:59:56.161566 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.161574 | orchestrator | Monday 14 April 2025 00:52:37 +0000 (0:00:00.682) 0:06:24.397 ********** 2025-04-14 00:59:56.161582 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161589 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161597 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161605 | orchestrator | 2025-04-14 00:59:56.161613 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.161622 | orchestrator | Monday 14 April 2025 00:52:37 +0000 (0:00:00.399) 0:06:24.797 ********** 2025-04-14 00:59:56.161631 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161638 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161646 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161653 | orchestrator | 2025-04-14 00:59:56.161661 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.161668 | orchestrator | Monday 14 April 2025 00:52:38 +0000 (0:00:00.349) 0:06:25.146 ********** 2025-04-14 00:59:56.161676 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161684 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161691 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161699 | orchestrator | 2025-04-14 00:59:56.161706 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.161713 | orchestrator | Monday 14 April 2025 00:52:38 +0000 (0:00:00.447) 0:06:25.594 ********** 2025-04-14 00:59:56.161721 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161728 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161736 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161743 | orchestrator | 2025-04-14 00:59:56.161751 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.161758 | orchestrator | Monday 14 April 2025 00:52:39 +0000 (0:00:00.789) 0:06:26.383 ********** 2025-04-14 00:59:56.161771 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161779 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161787 | orchestrator | skipping: [testbed-node-2] 2025-04-14 
00:59:56.161794 | orchestrator | 2025-04-14 00:59:56.161801 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.161809 | orchestrator | Monday 14 April 2025 00:52:39 +0000 (0:00:00.392) 0:06:26.777 ********** 2025-04-14 00:59:56.161816 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161823 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161832 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161840 | orchestrator | 2025-04-14 00:59:56.161849 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.161856 | orchestrator | Monday 14 April 2025 00:52:40 +0000 (0:00:00.377) 0:06:27.154 ********** 2025-04-14 00:59:56.161893 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.161911 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.161919 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.161927 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.161935 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.161943 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.161951 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.161958 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.161966 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.161973 | orchestrator | 2025-04-14 00:59:56.161980 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.161987 | orchestrator | Monday 14 April 2025 00:52:40 +0000 (0:00:00.455) 0:06:27.610 ********** 2025-04-14 00:59:56.161994 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-14 00:59:56.162002 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-14 00:59:56.162010 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162058 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-14 00:59:56.162067 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-14 00:59:56.162075 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162084 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-14 00:59:56.162093 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-14 00:59:56.162102 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162111 | orchestrator | 2025-04-14 00:59:56.162120 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.162129 | orchestrator | Monday 14 April 2025 00:52:41 +0000 (0:00:00.778) 0:06:28.388 ********** 2025-04-14 00:59:56.162138 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162148 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162157 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162167 | orchestrator | 2025-04-14 00:59:56.162175 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.162190 | orchestrator | Monday 14 April 2025 00:52:41 +0000 (0:00:00.364) 0:06:28.752 ********** 2025-04-14 00:59:56.162199 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162208 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162216 | 
orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162225 | orchestrator | 2025-04-14 00:59:56.162233 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.162243 | orchestrator | Monday 14 April 2025 00:52:42 +0000 (0:00:00.362) 0:06:29.115 ********** 2025-04-14 00:59:56.162252 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162263 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162272 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162280 | orchestrator | 2025-04-14 00:59:56.162289 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.162298 | orchestrator | Monday 14 April 2025 00:52:42 +0000 (0:00:00.348) 0:06:29.464 ********** 2025-04-14 00:59:56.162307 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162316 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162325 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162334 | orchestrator | 2025-04-14 00:59:56.162341 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.162349 | orchestrator | Monday 14 April 2025 00:52:43 +0000 (0:00:00.631) 0:06:30.096 ********** 2025-04-14 00:59:56.162358 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162367 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162376 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162385 | orchestrator | 2025-04-14 00:59:56.162393 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.162408 | orchestrator | Monday 14 April 2025 00:52:43 +0000 (0:00:00.355) 0:06:30.451 ********** 2025-04-14 00:59:56.162417 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162426 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162435 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162444 | orchestrator | 2025-04-14 00:59:56.162453 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.162462 | orchestrator | Monday 14 April 2025 00:52:43 +0000 (0:00:00.359) 0:06:30.811 ********** 2025-04-14 00:59:56.162470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.162479 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.162488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.162496 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162505 | orchestrator | 2025-04-14 00:59:56.162515 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.162523 | orchestrator | Monday 14 April 2025 00:52:44 +0000 (0:00:00.425) 0:06:31.236 ********** 2025-04-14 00:59:56.162531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.162538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.162547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.162555 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162563 | orchestrator | 2025-04-14 00:59:56.162572 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-04-14 00:59:56.162580 | orchestrator | Monday 14 April 2025 00:52:44 +0000 (0:00:00.464) 0:06:31.701 ********** 2025-04-14 00:59:56.162588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.162596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.162604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.162612 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162617 | orchestrator | 2025-04-14 00:59:56.162622 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.162665 | orchestrator | Monday 14 April 2025 00:52:45 +0000 (0:00:00.796) 0:06:32.497 ********** 2025-04-14 00:59:56.162671 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162676 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162681 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162686 | orchestrator | 2025-04-14 00:59:56.162691 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.162696 | orchestrator | Monday 14 April 2025 00:52:46 +0000 (0:00:00.704) 0:06:33.202 ********** 2025-04-14 00:59:56.162701 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.162706 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162711 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.162715 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162720 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.162725 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162730 | orchestrator | 2025-04-14 00:59:56.162735 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.162740 | orchestrator | Monday 14 April 2025 00:52:47 +0000 (0:00:00.708) 0:06:33.911 ********** 2025-04-14 00:59:56.162744 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162749 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162754 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162759 | orchestrator | 2025-04-14 00:59:56.162764 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.162769 | orchestrator | Monday 14 April 2025 00:52:47 +0000 (0:00:00.517) 0:06:34.428 ********** 2025-04-14 00:59:56.162773 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162778 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162792 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162797 | orchestrator | 2025-04-14 00:59:56.162802 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.162807 | orchestrator | Monday 14 April 2025 00:52:48 +0000 (0:00:00.684) 0:06:35.113 ********** 2025-04-14 00:59:56.162812 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.162816 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162821 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.162826 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162831 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.162836 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162841 | orchestrator | 2025-04-14 
00:59:56.162846 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.162853 | orchestrator | Monday 14 April 2025 00:52:48 +0000 (0:00:00.585) 0:06:35.699 ********** 2025-04-14 00:59:56.162860 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162867 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162875 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162880 | orchestrator | 2025-04-14 00:59:56.162885 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.162890 | orchestrator | Monday 14 April 2025 00:52:49 +0000 (0:00:00.387) 0:06:36.086 ********** 2025-04-14 00:59:56.162895 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.162900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.162908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.162915 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:59:56.162925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:59:56.162930 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:59:56.162935 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162940 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:59:56.162944 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:59:56.162949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:59:56.162954 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162959 | orchestrator | 2025-04-14 00:59:56.162964 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.162969 | orchestrator | Monday 14 April 2025 00:52:50 +0000 (0:00:00.953) 0:06:37.039 ********** 2025-04-14 00:59:56.162973 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.162978 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.162983 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.162988 | orchestrator | 2025-04-14 00:59:56.162993 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.162998 | orchestrator | Monday 14 April 2025 00:52:50 +0000 (0:00:00.607) 0:06:37.647 ********** 2025-04-14 00:59:56.163002 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163007 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163012 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163017 | orchestrator | 2025-04-14 00:59:56.163025 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.163030 | orchestrator | Monday 14 April 2025 00:52:51 +0000 (0:00:00.901) 0:06:38.549 ********** 2025-04-14 00:59:56.163051 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163056 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163061 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163066 | orchestrator | 2025-04-14 00:59:56.163071 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.163076 | orchestrator | Monday 14 April 2025 
00:52:52 +0000 (0:00:00.604) 0:06:39.154 ********** 2025-04-14 00:59:56.163084 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163089 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163094 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163099 | orchestrator | 2025-04-14 00:59:56.163103 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-04-14 00:59:56.163108 | orchestrator | Monday 14 April 2025 00:52:53 +0000 (0:00:00.979) 0:06:40.133 ********** 2025-04-14 00:59:56.163113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 00:59:56.163134 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.163140 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.163145 | orchestrator | 2025-04-14 00:59:56.163150 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-04-14 00:59:56.163155 | orchestrator | Monday 14 April 2025 00:52:54 +0000 (0:00:00.811) 0:06:40.945 ********** 2025-04-14 00:59:56.163160 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.163165 | orchestrator | 2025-04-14 00:59:56.163170 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-04-14 00:59:56.163175 | orchestrator | Monday 14 April 2025 00:52:54 +0000 (0:00:00.574) 0:06:41.520 ********** 2025-04-14 00:59:56.163180 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163184 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163189 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163194 | orchestrator | 2025-04-14 00:59:56.163199 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-04-14 00:59:56.163204 | orchestrator | Monday 14 April 2025 00:52:55 +0000 (0:00:00.733) 0:06:42.253 ********** 2025-04-14 00:59:56.163209 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163214 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163222 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163227 | orchestrator | 2025-04-14 00:59:56.163232 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-04-14 00:59:56.163237 | orchestrator | Monday 14 April 2025 00:52:56 +0000 (0:00:00.607) 0:06:42.860 ********** 2025-04-14 00:59:56.163242 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 00:59:56.163247 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 00:59:56.163252 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 00:59:56.163257 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-04-14 00:59:56.163262 | orchestrator | 2025-04-14 00:59:56.163267 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-04-14 00:59:56.163272 | orchestrator | Monday 14 April 2025 00:53:04 +0000 (0:00:08.020) 0:06:50.880 ********** 2025-04-14 00:59:56.163277 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.163282 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.163286 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.163291 | orchestrator | 2025-04-14 00:59:56.163296 | orchestrator | TASK [ceph-mgr : get keys 
from monitors] *************************************** 2025-04-14 00:59:56.163301 | orchestrator | Monday 14 April 2025 00:53:04 +0000 (0:00:00.618) 0:06:51.499 ********** 2025-04-14 00:59:56.163306 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-14 00:59:56.163311 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-14 00:59:56.163316 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-14 00:59:56.163321 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-14 00:59:56.163326 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 00:59:56.163331 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 00:59:56.163335 | orchestrator | 2025-04-14 00:59:56.163340 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-04-14 00:59:56.163345 | orchestrator | Monday 14 April 2025 00:53:06 +0000 (0:00:01.970) 0:06:53.470 ********** 2025-04-14 00:59:56.163353 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-14 00:59:56.163358 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-14 00:59:56.163363 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-14 00:59:56.163368 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 00:59:56.163373 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-04-14 00:59:56.163378 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-04-14 00:59:56.163383 | orchestrator | 2025-04-14 00:59:56.163388 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-04-14 00:59:56.163392 | orchestrator | Monday 14 April 2025 00:53:07 +0000 (0:00:01.250) 0:06:54.720 ********** 2025-04-14 00:59:56.163397 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.163402 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.163407 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.163412 | orchestrator | 2025-04-14 00:59:56.163417 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-04-14 00:59:56.163422 | orchestrator | Monday 14 April 2025 00:53:08 +0000 (0:00:01.071) 0:06:55.792 ********** 2025-04-14 00:59:56.163426 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163431 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163436 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163441 | orchestrator | 2025-04-14 00:59:56.163446 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-04-14 00:59:56.163451 | orchestrator | Monday 14 April 2025 00:53:09 +0000 (0:00:00.440) 0:06:56.233 ********** 2025-04-14 00:59:56.163456 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163461 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163465 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163470 | orchestrator | 2025-04-14 00:59:56.163475 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-04-14 00:59:56.163480 | orchestrator | Monday 14 April 2025 00:53:09 +0000 (0:00:00.320) 0:06:56.553 ********** 2025-04-14 00:59:56.163485 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.163490 | orchestrator | 2025-04-14 00:59:56.163497 | orchestrator | TASK 
[ceph-mgr : ensure systemd service override directory exists] ************* 2025-04-14 00:59:56.163503 | orchestrator | Monday 14 April 2025 00:53:10 +0000 (0:00:00.851) 0:06:57.404 ********** 2025-04-14 00:59:56.163509 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163518 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163523 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163527 | orchestrator | 2025-04-14 00:59:56.163532 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-04-14 00:59:56.163549 | orchestrator | Monday 14 April 2025 00:53:11 +0000 (0:00:00.483) 0:06:57.887 ********** 2025-04-14 00:59:56.163555 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163560 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163565 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.163569 | orchestrator | 2025-04-14 00:59:56.163574 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-04-14 00:59:56.163579 | orchestrator | Monday 14 April 2025 00:53:11 +0000 (0:00:00.463) 0:06:58.351 ********** 2025-04-14 00:59:56.163584 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.163589 | orchestrator | 2025-04-14 00:59:56.163594 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-04-14 00:59:56.163598 | orchestrator | Monday 14 April 2025 00:53:12 +0000 (0:00:00.903) 0:06:59.254 ********** 2025-04-14 00:59:56.163603 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163608 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163613 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163617 | orchestrator | 2025-04-14 00:59:56.163622 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-04-14 00:59:56.163631 | orchestrator | Monday 14 April 2025 00:53:13 +0000 (0:00:01.220) 0:07:00.475 ********** 2025-04-14 00:59:56.163635 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163640 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163645 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163650 | orchestrator | 2025-04-14 00:59:56.163655 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-04-14 00:59:56.163659 | orchestrator | Monday 14 April 2025 00:53:14 +0000 (0:00:01.145) 0:07:01.620 ********** 2025-04-14 00:59:56.163664 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163669 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163674 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163678 | orchestrator | 2025-04-14 00:59:56.163683 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-04-14 00:59:56.163688 | orchestrator | Monday 14 April 2025 00:53:16 +0000 (0:00:01.980) 0:07:03.601 ********** 2025-04-14 00:59:56.163693 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163698 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163702 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163707 | orchestrator | 2025-04-14 00:59:56.163712 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-04-14 00:59:56.163717 | orchestrator | Monday 14 April 
2025 00:53:18 +0000 (0:00:01.917) 0:07:05.518 ********** 2025-04-14 00:59:56.163721 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.163726 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.163731 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-04-14 00:59:56.163736 | orchestrator | 2025-04-14 00:59:56.163741 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-04-14 00:59:56.163746 | orchestrator | Monday 14 April 2025 00:53:19 +0000 (0:00:00.578) 0:07:06.097 ********** 2025-04-14 00:59:56.163751 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-04-14 00:59:56.163756 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-04-14 00:59:56.163760 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.163765 | orchestrator | 2025-04-14 00:59:56.163770 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-04-14 00:59:56.163775 | orchestrator | Monday 14 April 2025 00:53:32 +0000 (0:00:13.745) 0:07:19.842 ********** 2025-04-14 00:59:56.163780 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.163785 | orchestrator | 2025-04-14 00:59:56.163789 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-04-14 00:59:56.163794 | orchestrator | Monday 14 April 2025 00:53:34 +0000 (0:00:01.428) 0:07:21.271 ********** 2025-04-14 00:59:56.163799 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.163804 | orchestrator | 2025-04-14 00:59:56.163809 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-04-14 00:59:56.163813 | orchestrator | Monday 14 April 2025 00:53:34 +0000 (0:00:00.469) 0:07:21.741 ********** 2025-04-14 00:59:56.163818 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.163823 | orchestrator | 2025-04-14 00:59:56.163828 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-04-14 00:59:56.163832 | orchestrator | Monday 14 April 2025 00:53:35 +0000 (0:00:00.303) 0:07:22.044 ********** 2025-04-14 00:59:56.163837 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-04-14 00:59:56.163842 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-04-14 00:59:56.163847 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-04-14 00:59:56.163851 | orchestrator | 2025-04-14 00:59:56.163856 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-04-14 00:59:56.163866 | orchestrator | Monday 14 April 2025 00:53:42 +0000 (0:00:06.909) 0:07:28.953 ********** 2025-04-14 00:59:56.163871 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-04-14 00:59:56.163876 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-04-14 00:59:56.163881 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-04-14 00:59:56.163886 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-04-14 00:59:56.163890 | orchestrator | 2025-04-14 00:59:56.163895 | orchestrator | RUNNING HANDLER 
[ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.163900 | orchestrator | Monday 14 April 2025 00:53:47 +0000 (0:00:05.757) 0:07:34.711 ********** 2025-04-14 00:59:56.163905 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163910 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.163914 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.163919 | orchestrator | 2025-04-14 00:59:56.163936 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-04-14 00:59:56.163942 | orchestrator | Monday 14 April 2025 00:53:48 +0000 (0:00:00.944) 0:07:35.656 ********** 2025-04-14 00:59:56.163946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 00:59:56.163951 | orchestrator | 2025-04-14 00:59:56.163956 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-04-14 00:59:56.163961 | orchestrator | Monday 14 April 2025 00:53:49 +0000 (0:00:00.584) 0:07:36.240 ********** 2025-04-14 00:59:56.163966 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.163971 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.163975 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.163980 | orchestrator | 2025-04-14 00:59:56.163985 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-04-14 00:59:56.163990 | orchestrator | Monday 14 April 2025 00:53:49 +0000 (0:00:00.349) 0:07:36.589 ********** 2025-04-14 00:59:56.163995 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.163999 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.164004 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.164009 | orchestrator | 2025-04-14 00:59:56.164014 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-04-14 00:59:56.164019 | orchestrator | Monday 14 April 2025 00:53:51 +0000 (0:00:01.286) 0:07:37.875 ********** 2025-04-14 00:59:56.164023 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 00:59:56.164028 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 00:59:56.164066 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 00:59:56.164073 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.164078 | orchestrator | 2025-04-14 00:59:56.164083 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-04-14 00:59:56.164087 | orchestrator | Monday 14 April 2025 00:53:51 +0000 (0:00:00.696) 0:07:38.572 ********** 2025-04-14 00:59:56.164092 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.164097 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.164102 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.164107 | orchestrator | 2025-04-14 00:59:56.164112 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.164117 | orchestrator | Monday 14 April 2025 00:53:52 +0000 (0:00:00.360) 0:07:38.932 ********** 2025-04-14 00:59:56.164121 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.164129 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.164134 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.164139 | orchestrator | 2025-04-14 00:59:56.164144 | orchestrator | PLAY [Apply role ceph-osd] 
***************************************************** 2025-04-14 00:59:56.164149 | orchestrator | 2025-04-14 00:59:56.164153 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.164158 | orchestrator | Monday 14 April 2025 00:53:54 +0000 (0:00:02.401) 0:07:41.333 ********** 2025-04-14 00:59:56.164167 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.164172 | orchestrator | 2025-04-14 00:59:56.164176 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.164181 | orchestrator | Monday 14 April 2025 00:53:55 +0000 (0:00:00.597) 0:07:41.931 ********** 2025-04-14 00:59:56.164186 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164191 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164196 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164201 | orchestrator | 2025-04-14 00:59:56.164206 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.164210 | orchestrator | Monday 14 April 2025 00:53:55 +0000 (0:00:00.400) 0:07:42.332 ********** 2025-04-14 00:59:56.164215 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164220 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164225 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164230 | orchestrator | 2025-04-14 00:59:56.164235 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.164239 | orchestrator | Monday 14 April 2025 00:53:56 +0000 (0:00:00.994) 0:07:43.326 ********** 2025-04-14 00:59:56.164244 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164249 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164254 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164259 | orchestrator | 2025-04-14 00:59:56.164263 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.164268 | orchestrator | Monday 14 April 2025 00:53:57 +0000 (0:00:00.770) 0:07:44.096 ********** 2025-04-14 00:59:56.164273 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164278 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164282 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164287 | orchestrator | 2025-04-14 00:59:56.164292 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.164297 | orchestrator | Monday 14 April 2025 00:53:57 +0000 (0:00:00.749) 0:07:44.845 ********** 2025-04-14 00:59:56.164302 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164307 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164311 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164316 | orchestrator | 2025-04-14 00:59:56.164321 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.164326 | orchestrator | Monday 14 April 2025 00:53:58 +0000 (0:00:00.323) 0:07:45.168 ********** 2025-04-14 00:59:56.164331 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164335 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164340 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164345 | orchestrator | 2025-04-14 00:59:56.164352 | orchestrator | TASK [ceph-handler 
: check for a nfs container] ******************************** 2025-04-14 00:59:56.164357 | orchestrator | Monday 14 April 2025 00:53:58 +0000 (0:00:00.599) 0:07:45.768 ********** 2025-04-14 00:59:56.164362 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164367 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164372 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164377 | orchestrator | 2025-04-14 00:59:56.164381 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.164399 | orchestrator | Monday 14 April 2025 00:53:59 +0000 (0:00:00.346) 0:07:46.115 ********** 2025-04-14 00:59:56.164405 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164410 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164415 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164419 | orchestrator | 2025-04-14 00:59:56.164424 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.164429 | orchestrator | Monday 14 April 2025 00:53:59 +0000 (0:00:00.328) 0:07:46.443 ********** 2025-04-14 00:59:56.164434 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164439 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164447 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164452 | orchestrator | 2025-04-14 00:59:56.164457 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.164461 | orchestrator | Monday 14 April 2025 00:53:59 +0000 (0:00:00.324) 0:07:46.768 ********** 2025-04-14 00:59:56.164466 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164471 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164476 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164481 | orchestrator | 2025-04-14 00:59:56.164486 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.164490 | orchestrator | Monday 14 April 2025 00:54:00 +0000 (0:00:00.690) 0:07:47.458 ********** 2025-04-14 00:59:56.164495 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164500 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164505 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164510 | orchestrator | 2025-04-14 00:59:56.164515 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.164520 | orchestrator | Monday 14 April 2025 00:54:01 +0000 (0:00:00.687) 0:07:48.146 ********** 2025-04-14 00:59:56.164524 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164529 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164534 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164539 | orchestrator | 2025-04-14 00:59:56.164543 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-14 00:59:56.164548 | orchestrator | Monday 14 April 2025 00:54:01 +0000 (0:00:00.354) 0:07:48.500 ********** 2025-04-14 00:59:56.164553 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164558 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164563 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164568 | orchestrator | 2025-04-14 00:59:56.164572 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 
00:59:56.164577 | orchestrator | Monday 14 April 2025 00:54:01 +0000 (0:00:00.308) 0:07:48.808 ********** 2025-04-14 00:59:56.164582 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164587 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164592 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164597 | orchestrator | 2025-04-14 00:59:56.164601 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.164606 | orchestrator | Monday 14 April 2025 00:54:02 +0000 (0:00:00.625) 0:07:49.434 ********** 2025-04-14 00:59:56.164611 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164616 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164620 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164625 | orchestrator | 2025-04-14 00:59:56.164630 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.164635 | orchestrator | Monday 14 April 2025 00:54:02 +0000 (0:00:00.343) 0:07:49.777 ********** 2025-04-14 00:59:56.164640 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164645 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164650 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164654 | orchestrator | 2025-04-14 00:59:56.164659 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.164664 | orchestrator | Monday 14 April 2025 00:54:03 +0000 (0:00:00.346) 0:07:50.124 ********** 2025-04-14 00:59:56.164669 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164676 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164681 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164686 | orchestrator | 2025-04-14 00:59:56.164691 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.164696 | orchestrator | Monday 14 April 2025 00:54:03 +0000 (0:00:00.337) 0:07:50.461 ********** 2025-04-14 00:59:56.164701 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164705 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164710 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164720 | orchestrator | 2025-04-14 00:59:56.164725 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.164730 | orchestrator | Monday 14 April 2025 00:54:04 +0000 (0:00:00.615) 0:07:51.076 ********** 2025-04-14 00:59:56.164734 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164739 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164744 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164749 | orchestrator | 2025-04-14 00:59:56.164754 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.164758 | orchestrator | Monday 14 April 2025 00:54:04 +0000 (0:00:00.335) 0:07:51.411 ********** 2025-04-14 00:59:56.164763 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.164768 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.164773 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.164778 | orchestrator | 2025-04-14 00:59:56.164783 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.164787 | orchestrator | Monday 14 April 2025 00:54:04 +0000 (0:00:00.332) 0:07:51.744 ********** 2025-04-14 00:59:56.164792 
| orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164797 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164802 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164807 | orchestrator | 2025-04-14 00:59:56.164811 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.164816 | orchestrator | Monday 14 April 2025 00:54:05 +0000 (0:00:00.382) 0:07:52.127 ********** 2025-04-14 00:59:56.164821 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164826 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164831 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164836 | orchestrator | 2025-04-14 00:59:56.164843 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.164859 | orchestrator | Monday 14 April 2025 00:54:05 +0000 (0:00:00.643) 0:07:52.771 ********** 2025-04-14 00:59:56.164865 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164870 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164875 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164880 | orchestrator | 2025-04-14 00:59:56.164884 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.164889 | orchestrator | Monday 14 April 2025 00:54:06 +0000 (0:00:00.379) 0:07:53.151 ********** 2025-04-14 00:59:56.164894 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164899 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164904 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164909 | orchestrator | 2025-04-14 00:59:56.164913 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.164918 | orchestrator | Monday 14 April 2025 00:54:06 +0000 (0:00:00.373) 0:07:53.525 ********** 2025-04-14 00:59:56.164923 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164928 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164933 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164937 | orchestrator | 2025-04-14 00:59:56.164942 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.164947 | orchestrator | Monday 14 April 2025 00:54:07 +0000 (0:00:00.389) 0:07:53.914 ********** 2025-04-14 00:59:56.164952 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164957 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.164962 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.164966 | orchestrator | 2025-04-14 00:59:56.164971 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.164976 | orchestrator | Monday 14 April 2025 00:54:07 +0000 (0:00:00.623) 0:07:54.537 ********** 2025-04-14 00:59:56.164984 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.164992 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165000 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165007 | orchestrator | 2025-04-14 00:59:56.165015 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.165028 | orchestrator | Monday 14 April 2025 00:54:08 +0000 (0:00:00.375) 0:07:54.913 ********** 2025-04-14 00:59:56.165047 | orchestrator | skipping: 
[testbed-node-3] 2025-04-14 00:59:56.165055 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165063 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165071 | orchestrator | 2025-04-14 00:59:56.165078 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.165085 | orchestrator | Monday 14 April 2025 00:54:08 +0000 (0:00:00.333) 0:07:55.247 ********** 2025-04-14 00:59:56.165093 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165098 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165103 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165108 | orchestrator | 2025-04-14 00:59:56.165113 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.165118 | orchestrator | Monday 14 April 2025 00:54:08 +0000 (0:00:00.395) 0:07:55.642 ********** 2025-04-14 00:59:56.165122 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165127 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165132 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165137 | orchestrator | 2025-04-14 00:59:56.165142 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.165146 | orchestrator | Monday 14 April 2025 00:54:09 +0000 (0:00:00.789) 0:07:56.431 ********** 2025-04-14 00:59:56.165151 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165156 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165161 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165166 | orchestrator | 2025-04-14 00:59:56.165171 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.165175 | orchestrator | Monday 14 April 2025 00:54:09 +0000 (0:00:00.358) 0:07:56.790 ********** 2025-04-14 00:59:56.165180 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165185 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165190 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165194 | orchestrator | 2025-04-14 00:59:56.165199 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.165204 | orchestrator | Monday 14 April 2025 00:54:10 +0000 (0:00:00.363) 0:07:57.153 ********** 2025-04-14 00:59:56.165209 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.165214 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.165219 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165224 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.165229 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.165233 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165238 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.165243 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.165248 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165253 | orchestrator | 2025-04-14 00:59:56.165257 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.165262 | orchestrator | Monday 14 April 2025 00:54:10 +0000 (0:00:00.438) 0:07:57.591 ********** 2025-04-14 00:59:56.165267 | 
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-14 00:59:56.165275 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-14 00:59:56.165280 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165287 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-14 00:59:56.165292 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-14 00:59:56.165297 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165302 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-14 00:59:56.165310 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-14 00:59:56.165315 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165320 | orchestrator | 2025-04-14 00:59:56.165324 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.165344 | orchestrator | Monday 14 April 2025 00:54:11 +0000 (0:00:00.765) 0:07:58.357 ********** 2025-04-14 00:59:56.165350 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165355 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165360 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165365 | orchestrator | 2025-04-14 00:59:56.165369 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.165374 | orchestrator | Monday 14 April 2025 00:54:11 +0000 (0:00:00.484) 0:07:58.841 ********** 2025-04-14 00:59:56.165379 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165384 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165389 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165393 | orchestrator | 2025-04-14 00:59:56.165398 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.165403 | orchestrator | Monday 14 April 2025 00:54:12 +0000 (0:00:00.458) 0:07:59.300 ********** 2025-04-14 00:59:56.165408 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165413 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165418 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165423 | orchestrator | 2025-04-14 00:59:56.165427 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.165432 | orchestrator | Monday 14 April 2025 00:54:12 +0000 (0:00:00.533) 0:07:59.833 ********** 2025-04-14 00:59:56.165437 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165442 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165446 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165451 | orchestrator | 2025-04-14 00:59:56.165456 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.165461 | orchestrator | Monday 14 April 2025 00:54:13 +0000 (0:00:00.628) 0:08:00.462 ********** 2025-04-14 00:59:56.165466 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165470 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165475 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165480 | orchestrator | 2025-04-14 00:59:56.165485 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.165489 | orchestrator | Monday 
14 April 2025 00:54:13 +0000 (0:00:00.328) 0:08:00.791 ********** 2025-04-14 00:59:56.165494 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165499 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165504 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165509 | orchestrator | 2025-04-14 00:59:56.165513 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.165521 | orchestrator | Monday 14 April 2025 00:54:14 +0000 (0:00:00.349) 0:08:01.140 ********** 2025-04-14 00:59:56.165526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.165531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.165535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.165540 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165545 | orchestrator | 2025-04-14 00:59:56.165550 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.165555 | orchestrator | Monday 14 April 2025 00:54:14 +0000 (0:00:00.496) 0:08:01.637 ********** 2025-04-14 00:59:56.165560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.165565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.165569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.165574 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165583 | orchestrator | 2025-04-14 00:59:56.165588 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.165592 | orchestrator | Monday 14 April 2025 00:54:15 +0000 (0:00:00.522) 0:08:02.159 ********** 2025-04-14 00:59:56.165597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.165602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.165607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.165612 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165617 | orchestrator | 2025-04-14 00:59:56.165622 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.165626 | orchestrator | Monday 14 April 2025 00:54:16 +0000 (0:00:00.757) 0:08:02.917 ********** 2025-04-14 00:59:56.165631 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165636 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165641 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165646 | orchestrator | 2025-04-14 00:59:56.165651 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.165655 | orchestrator | Monday 14 April 2025 00:54:16 +0000 (0:00:00.596) 0:08:03.513 ********** 2025-04-14 00:59:56.165660 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.165665 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165670 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.165675 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165680 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.165684 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165689 | orchestrator | 2025-04-14 00:59:56.165694 | 
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.165699 | orchestrator | Monday 14 April 2025 00:54:17 +0000 (0:00:00.629) 0:08:04.143 ********** 2025-04-14 00:59:56.165704 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165709 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165714 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165719 | orchestrator | 2025-04-14 00:59:56.165723 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.165728 | orchestrator | Monday 14 April 2025 00:54:17 +0000 (0:00:00.340) 0:08:04.483 ********** 2025-04-14 00:59:56.165733 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165738 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165743 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165748 | orchestrator | 2025-04-14 00:59:56.165764 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.165769 | orchestrator | Monday 14 April 2025 00:54:17 +0000 (0:00:00.361) 0:08:04.845 ********** 2025-04-14 00:59:56.165774 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.165779 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165784 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.165789 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165794 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.165799 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165804 | orchestrator | 2025-04-14 00:59:56.165809 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.165813 | orchestrator | Monday 14 April 2025 00:54:18 +0000 (0:00:00.813) 0:08:05.658 ********** 2025-04-14 00:59:56.165818 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.165823 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165828 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.165833 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165838 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.165846 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165852 | orchestrator | 2025-04-14 00:59:56.165861 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.165869 | orchestrator | Monday 14 April 2025 00:54:19 +0000 (0:00:00.453) 0:08:06.112 ********** 2025-04-14 00:59:56.165877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.165885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.165893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.165902 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.165908 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.165913 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-5)  2025-04-14 00:59:56.165918 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165923 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165927 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.165932 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.165937 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.165942 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165947 | orchestrator | 2025-04-14 00:59:56.165951 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.165956 | orchestrator | Monday 14 April 2025 00:54:19 +0000 (0:00:00.628) 0:08:06.741 ********** 2025-04-14 00:59:56.165961 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.165966 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.165971 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.165975 | orchestrator | 2025-04-14 00:59:56.165980 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.165985 | orchestrator | Monday 14 April 2025 00:54:20 +0000 (0:00:00.930) 0:08:07.672 ********** 2025-04-14 00:59:56.165990 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.165995 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166000 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.166004 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166009 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.166031 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166053 | orchestrator | 2025-04-14 00:59:56.166058 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.166063 | orchestrator | Monday 14 April 2025 00:54:21 +0000 (0:00:00.606) 0:08:08.278 ********** 2025-04-14 00:59:56.166068 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166073 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166084 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166093 | orchestrator | 2025-04-14 00:59:56.166100 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.166108 | orchestrator | Monday 14 April 2025 00:54:22 +0000 (0:00:00.834) 0:08:09.112 ********** 2025-04-14 00:59:56.166115 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166122 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166130 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166138 | orchestrator | 2025-04-14 00:59:56.166146 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-04-14 00:59:56.166153 | orchestrator | Monday 14 April 2025 00:54:22 +0000 (0:00:00.572) 0:08:09.686 ********** 2025-04-14 00:59:56.166161 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.166169 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.166173 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.166178 | orchestrator | 2025-04-14 00:59:56.166183 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-04-14 00:59:56.166196 | orchestrator | Monday 14 April 2025 00:54:23 +0000 (0:00:00.659) 0:08:10.345 ********** 
2025-04-14 00:59:56.166207 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-14 00:59:56.166215 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 00:59:56.166223 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 00:59:56.166231 | orchestrator | 2025-04-14 00:59:56.166239 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-04-14 00:59:56.166248 | orchestrator | Monday 14 April 2025 00:54:24 +0000 (0:00:00.753) 0:08:11.098 ********** 2025-04-14 00:59:56.166270 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.166277 | orchestrator | 2025-04-14 00:59:56.166286 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-04-14 00:59:56.166294 | orchestrator | Monday 14 April 2025 00:54:24 +0000 (0:00:00.584) 0:08:11.683 ********** 2025-04-14 00:59:56.166303 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166312 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166321 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166334 | orchestrator | 2025-04-14 00:59:56.166343 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-04-14 00:59:56.166353 | orchestrator | Monday 14 April 2025 00:54:25 +0000 (0:00:00.585) 0:08:12.268 ********** 2025-04-14 00:59:56.166358 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166363 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166372 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166379 | orchestrator | 2025-04-14 00:59:56.166387 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-04-14 00:59:56.166395 | orchestrator | Monday 14 April 2025 00:54:25 +0000 (0:00:00.335) 0:08:12.604 ********** 2025-04-14 00:59:56.166403 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166411 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166419 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166427 | orchestrator | 2025-04-14 00:59:56.166435 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-04-14 00:59:56.166443 | orchestrator | Monday 14 April 2025 00:54:26 +0000 (0:00:00.336) 0:08:12.941 ********** 2025-04-14 00:59:56.166450 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166458 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166466 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166474 | orchestrator | 2025-04-14 00:59:56.166482 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-04-14 00:59:56.166490 | orchestrator | Monday 14 April 2025 00:54:26 +0000 (0:00:00.312) 0:08:13.253 ********** 2025-04-14 00:59:56.166497 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.166505 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.166513 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.166521 | orchestrator | 2025-04-14 00:59:56.166529 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-04-14 00:59:56.166537 | orchestrator | Monday 14 April 2025 00:54:27 +0000 (0:00:00.943) 
0:08:14.196 ********** 2025-04-14 00:59:56.166544 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.166552 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.166560 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.166568 | orchestrator | 2025-04-14 00:59:56.166576 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-04-14 00:59:56.166584 | orchestrator | Monday 14 April 2025 00:54:27 +0000 (0:00:00.354) 0:08:14.551 ********** 2025-04-14 00:59:56.166592 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-14 00:59:56.166603 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-14 00:59:56.166616 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-04-14 00:59:56.166624 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-14 00:59:56.166632 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-14 00:59:56.166640 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-04-14 00:59:56.166648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-14 00:59:56.166656 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-14 00:59:56.166664 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-04-14 00:59:56.166672 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-14 00:59:56.166680 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-14 00:59:56.166688 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-04-14 00:59:56.166696 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-14 00:59:56.166704 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-14 00:59:56.166711 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-04-14 00:59:56.166719 | orchestrator | 2025-04-14 00:59:56.166727 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-04-14 00:59:56.166734 | orchestrator | Monday 14 April 2025 00:54:29 +0000 (0:00:02.149) 0:08:16.700 ********** 2025-04-14 00:59:56.166741 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.166749 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.166756 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.166764 | orchestrator | 2025-04-14 00:59:56.166772 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-04-14 00:59:56.166779 | orchestrator | Monday 14 April 2025 00:54:30 +0000 (0:00:00.312) 0:08:17.013 ********** 2025-04-14 00:59:56.166787 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.166795 | orchestrator | 2025-04-14 00:59:56.166809 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-04-14 00:59:56.166817 | 
orchestrator | Monday 14 April 2025 00:54:31 +0000 (0:00:00.881) 0:08:17.894 ********** 2025-04-14 00:59:56.166826 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-14 00:59:56.166856 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-14 00:59:56.166865 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-04-14 00:59:56.166873 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-04-14 00:59:56.166881 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-04-14 00:59:56.166889 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-04-14 00:59:56.166897 | orchestrator | 2025-04-14 00:59:56.166905 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-04-14 00:59:56.166913 | orchestrator | Monday 14 April 2025 00:54:32 +0000 (0:00:01.049) 0:08:18.943 ********** 2025-04-14 00:59:56.166921 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 00:59:56.166929 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.166937 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-14 00:59:56.166945 | orchestrator | 2025-04-14 00:59:56.166953 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-04-14 00:59:56.166961 | orchestrator | Monday 14 April 2025 00:54:33 +0000 (0:00:01.838) 0:08:20.782 ********** 2025-04-14 00:59:56.166969 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 00:59:56.166982 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.166990 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.167002 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 00:59:56.167010 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.167018 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.167025 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 00:59:56.167066 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.167078 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.167087 | orchestrator | 2025-04-14 00:59:56.167096 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-04-14 00:59:56.167105 | orchestrator | Monday 14 April 2025 00:54:35 +0000 (0:00:01.522) 0:08:22.305 ********** 2025-04-14 00:59:56.167114 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.167122 | orchestrator | 2025-04-14 00:59:56.167131 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-04-14 00:59:56.167139 | orchestrator | Monday 14 April 2025 00:54:37 +0000 (0:00:02.538) 0:08:24.844 ********** 2025-04-14 00:59:56.167148 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.167156 | orchestrator | 2025-04-14 00:59:56.167164 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-04-14 00:59:56.167172 | orchestrator | Monday 14 April 2025 00:54:38 +0000 (0:00:00.812) 0:08:25.657 ********** 2025-04-14 00:59:56.167181 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167189 | orchestrator | 
skipping: [testbed-node-4] 2025-04-14 00:59:56.167197 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167205 | orchestrator | 2025-04-14 00:59:56.167213 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-04-14 00:59:56.167222 | orchestrator | Monday 14 April 2025 00:54:39 +0000 (0:00:00.382) 0:08:26.039 ********** 2025-04-14 00:59:56.167231 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167239 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167247 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167256 | orchestrator | 2025-04-14 00:59:56.167265 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-04-14 00:59:56.167274 | orchestrator | Monday 14 April 2025 00:54:39 +0000 (0:00:00.442) 0:08:26.482 ********** 2025-04-14 00:59:56.167282 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167291 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167300 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167308 | orchestrator | 2025-04-14 00:59:56.167317 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-04-14 00:59:56.167326 | orchestrator | Monday 14 April 2025 00:54:39 +0000 (0:00:00.352) 0:08:26.835 ********** 2025-04-14 00:59:56.167334 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.167343 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.167352 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.167361 | orchestrator | 2025-04-14 00:59:56.167369 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-04-14 00:59:56.167378 | orchestrator | Monday 14 April 2025 00:54:40 +0000 (0:00:00.626) 0:08:27.461 ********** 2025-04-14 00:59:56.167386 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.167395 | orchestrator | 2025-04-14 00:59:56.167403 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-04-14 00:59:56.167411 | orchestrator | Monday 14 April 2025 00:54:41 +0000 (0:00:00.737) 0:08:28.199 ********** 2025-04-14 00:59:56.167420 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-89320cc7-f853-5314-9a76-744a2d019bd6', 'data_vg': 'ceph-89320cc7-f853-5314-9a76-744a2d019bd6'}) 2025-04-14 00:59:56.167436 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-010b5855-d3d9-5348-85e9-2943091c3a59', 'data_vg': 'ceph-010b5855-d3d9-5348-85e9-2943091c3a59'}) 2025-04-14 00:59:56.167445 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3f558b9-064d-5710-baa4-8e41f44a2baf', 'data_vg': 'ceph-b3f558b9-064d-5710-baa4-8e41f44a2baf'}) 2025-04-14 00:59:56.167454 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-47a37963-cc76-524e-bf57-deb935e0a7e9', 'data_vg': 'ceph-47a37963-cc76-524e-bf57-deb935e0a7e9'}) 2025-04-14 00:59:56.167485 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a8cf203b-da46-5fbb-85f7-5c1db9738ebe', 'data_vg': 'ceph-a8cf203b-da46-5fbb-85f7-5c1db9738ebe'}) 2025-04-14 00:59:56.167494 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1e3b39ff-ab1d-556f-9f1e-d127c66e789a', 'data_vg': 'ceph-1e3b39ff-ab1d-556f-9f1e-d127c66e789a'}) 
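
For context on the "use ceph-volume to create bluestore osds" task above: ceph-ansible's lvm scenario typically issues one ceph-volume call per logged data/data_vg item. A minimal sketch, assuming the module passes each item as --data <data_vg>/<data> (the VG/LV names below are copied from the first testbed-node-3 item; the container wrapper used in a containerized deployment is not shown in the log):

    # sketch only - one call per {data, data_vg} item on each OSD node
    ceph-volume --cluster ceph lvm create --bluestore \
        --data ceph-010b5855-d3d9-5348-85e9-2943091c3a59/osd-block-010b5855-d3d9-5348-85e9-2943091c3a59
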
2025-04-14 00:59:56.167502 | orchestrator | 2025-04-14 00:59:56.167510 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-04-14 00:59:56.167518 | orchestrator | Monday 14 April 2025 00:55:21 +0000 (0:00:40.134) 0:09:08.333 ********** 2025-04-14 00:59:56.167526 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167534 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167542 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167550 | orchestrator | 2025-04-14 00:59:56.167558 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-04-14 00:59:56.167566 | orchestrator | Monday 14 April 2025 00:55:21 +0000 (0:00:00.516) 0:09:08.850 ********** 2025-04-14 00:59:56.167575 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.167583 | orchestrator | 2025-04-14 00:59:56.167591 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-04-14 00:59:56.167599 | orchestrator | Monday 14 April 2025 00:55:22 +0000 (0:00:00.594) 0:09:09.444 ********** 2025-04-14 00:59:56.167607 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.167615 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.167623 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.167631 | orchestrator | 2025-04-14 00:59:56.167638 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-04-14 00:59:56.167646 | orchestrator | Monday 14 April 2025 00:55:23 +0000 (0:00:00.654) 0:09:10.098 ********** 2025-04-14 00:59:56.167654 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.167666 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.167673 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.167681 | orchestrator | 2025-04-14 00:59:56.167690 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-04-14 00:59:56.167696 | orchestrator | Monday 14 April 2025 00:55:24 +0000 (0:00:01.660) 0:09:11.759 ********** 2025-04-14 00:59:56.167701 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.167705 | orchestrator | 2025-04-14 00:59:56.167710 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-04-14 00:59:56.167715 | orchestrator | Monday 14 April 2025 00:55:25 +0000 (0:00:00.709) 0:09:12.468 ********** 2025-04-14 00:59:56.167720 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.167725 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.167729 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.167734 | orchestrator | 2025-04-14 00:59:56.167739 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-04-14 00:59:56.167746 | orchestrator | Monday 14 April 2025 00:55:26 +0000 (0:00:01.197) 0:09:13.666 ********** 2025-04-14 00:59:56.167751 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.167756 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.167761 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.167765 | orchestrator | 2025-04-14 00:59:56.167770 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-04-14 00:59:56.167780 | orchestrator | 
Monday 14 April 2025 00:55:28 +0000 (0:00:01.391) 0:09:15.058 ********** 2025-04-14 00:59:56.167785 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.167789 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.167794 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.167799 | orchestrator | 2025-04-14 00:59:56.167804 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-04-14 00:59:56.167809 | orchestrator | Monday 14 April 2025 00:55:29 +0000 (0:00:01.655) 0:09:16.713 ********** 2025-04-14 00:59:56.167813 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167818 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167823 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167828 | orchestrator | 2025-04-14 00:59:56.167833 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-04-14 00:59:56.167838 | orchestrator | Monday 14 April 2025 00:55:30 +0000 (0:00:00.342) 0:09:17.056 ********** 2025-04-14 00:59:56.167842 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167847 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167852 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.167856 | orchestrator | 2025-04-14 00:59:56.167861 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-04-14 00:59:56.167866 | orchestrator | Monday 14 April 2025 00:55:30 +0000 (0:00:00.758) 0:09:17.815 ********** 2025-04-14 00:59:56.167871 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-14 00:59:56.167876 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-04-14 00:59:56.167880 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-04-14 00:59:56.167885 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-04-14 00:59:56.167890 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-04-14 00:59:56.167895 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-04-14 00:59:56.167900 | orchestrator | 2025-04-14 00:59:56.167904 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-04-14 00:59:56.167909 | orchestrator | Monday 14 April 2025 00:55:32 +0000 (0:00:01.071) 0:09:18.887 ********** 2025-04-14 00:59:56.167914 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-04-14 00:59:56.167919 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-04-14 00:59:56.167924 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-04-14 00:59:56.167928 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-04-14 00:59:56.167933 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-04-14 00:59:56.167938 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-04-14 00:59:56.167943 | orchestrator | 2025-04-14 00:59:56.167947 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-04-14 00:59:56.167968 | orchestrator | Monday 14 April 2025 00:55:35 +0000 (0:00:03.428) 0:09:22.315 ********** 2025-04-14 00:59:56.167973 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.167978 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.167983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.167988 | orchestrator | 2025-04-14 00:59:56.167993 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-04-14 00:59:56.167998 | orchestrator | 
Monday 14 April 2025 00:55:38 +0000 (0:00:02.833) 0:09:25.149 ********** 2025-04-14 00:59:56.168002 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168007 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168012 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-04-14 00:59:56.168017 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.168022 | orchestrator | 2025-04-14 00:59:56.168027 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-04-14 00:59:56.168032 | orchestrator | Monday 14 April 2025 00:55:50 +0000 (0:00:12.566) 0:09:37.716 ********** 2025-04-14 00:59:56.168052 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168057 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168065 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168070 | orchestrator | 2025-04-14 00:59:56.168075 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-04-14 00:59:56.168080 | orchestrator | Monday 14 April 2025 00:55:51 +0000 (0:00:00.511) 0:09:38.227 ********** 2025-04-14 00:59:56.168084 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168089 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168094 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168099 | orchestrator | 2025-04-14 00:59:56.168104 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.168108 | orchestrator | Monday 14 April 2025 00:55:52 +0000 (0:00:01.224) 0:09:39.452 ********** 2025-04-14 00:59:56.168113 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.168118 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.168123 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.168127 | orchestrator | 2025-04-14 00:59:56.168132 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-04-14 00:59:56.168137 | orchestrator | Monday 14 April 2025 00:55:53 +0000 (0:00:00.711) 0:09:40.163 ********** 2025-04-14 00:59:56.168142 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.168147 | orchestrator | 2025-04-14 00:59:56.168151 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-04-14 00:59:56.168156 | orchestrator | Monday 14 April 2025 00:55:54 +0000 (0:00:00.881) 0:09:41.045 ********** 2025-04-14 00:59:56.168161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.168166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.168170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.168175 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168180 | orchestrator | 2025-04-14 00:59:56.168185 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-04-14 00:59:56.168190 | orchestrator | Monday 14 April 2025 00:55:54 +0000 (0:00:00.428) 0:09:41.473 ********** 2025-04-14 00:59:56.168194 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168199 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168204 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
00:59:56.168209 | orchestrator | 2025-04-14 00:59:56.168213 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-04-14 00:59:56.168218 | orchestrator | Monday 14 April 2025 00:55:55 +0000 (0:00:00.386) 0:09:41.860 ********** 2025-04-14 00:59:56.168223 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168228 | orchestrator | 2025-04-14 00:59:56.168232 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-04-14 00:59:56.168237 | orchestrator | Monday 14 April 2025 00:55:55 +0000 (0:00:00.290) 0:09:42.150 ********** 2025-04-14 00:59:56.168242 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168247 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168252 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168257 | orchestrator | 2025-04-14 00:59:56.168262 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-04-14 00:59:56.168292 | orchestrator | Monday 14 April 2025 00:55:55 +0000 (0:00:00.629) 0:09:42.780 ********** 2025-04-14 00:59:56.168298 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168303 | orchestrator | 2025-04-14 00:59:56.168307 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-04-14 00:59:56.168312 | orchestrator | Monday 14 April 2025 00:55:56 +0000 (0:00:00.229) 0:09:43.010 ********** 2025-04-14 00:59:56.168317 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168322 | orchestrator | 2025-04-14 00:59:56.168326 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-04-14 00:59:56.168331 | orchestrator | Monday 14 April 2025 00:55:56 +0000 (0:00:00.267) 0:09:43.277 ********** 2025-04-14 00:59:56.168339 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168344 | orchestrator | 2025-04-14 00:59:56.168349 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-04-14 00:59:56.168353 | orchestrator | Monday 14 April 2025 00:55:56 +0000 (0:00:00.128) 0:09:43.406 ********** 2025-04-14 00:59:56.168358 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168363 | orchestrator | 2025-04-14 00:59:56.168368 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-04-14 00:59:56.168372 | orchestrator | Monday 14 April 2025 00:55:56 +0000 (0:00:00.238) 0:09:43.645 ********** 2025-04-14 00:59:56.168377 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168382 | orchestrator | 2025-04-14 00:59:56.168387 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-04-14 00:59:56.168392 | orchestrator | Monday 14 April 2025 00:55:57 +0000 (0:00:00.238) 0:09:43.883 ********** 2025-04-14 00:59:56.168396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.168415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.168421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.168426 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168431 | orchestrator | 2025-04-14 00:59:56.168436 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-04-14 00:59:56.168441 | orchestrator | Monday 14 April 2025 00:55:57 +0000 (0:00:00.492) 0:09:44.376 
********** 2025-04-14 00:59:56.168445 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168453 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168458 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168463 | orchestrator | 2025-04-14 00:59:56.168468 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-04-14 00:59:56.168473 | orchestrator | Monday 14 April 2025 00:55:58 +0000 (0:00:00.484) 0:09:44.861 ********** 2025-04-14 00:59:56.168478 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168483 | orchestrator | 2025-04-14 00:59:56.168487 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-04-14 00:59:56.168494 | orchestrator | Monday 14 April 2025 00:55:58 +0000 (0:00:00.253) 0:09:45.114 ********** 2025-04-14 00:59:56.168499 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168504 | orchestrator | 2025-04-14 00:59:56.168509 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.168514 | orchestrator | Monday 14 April 2025 00:55:59 +0000 (0:00:00.932) 0:09:46.047 ********** 2025-04-14 00:59:56.168519 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.168524 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.168529 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.168534 | orchestrator | 2025-04-14 00:59:56.168539 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-04-14 00:59:56.168544 | orchestrator | 2025-04-14 00:59:56.168548 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.168553 | orchestrator | Monday 14 April 2025 00:56:02 +0000 (0:00:02.939) 0:09:48.986 ********** 2025-04-14 00:59:56.168558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.168563 | orchestrator | 2025-04-14 00:59:56.168568 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.168573 | orchestrator | Monday 14 April 2025 00:56:03 +0000 (0:00:01.306) 0:09:50.292 ********** 2025-04-14 00:59:56.168578 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168583 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.168588 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168593 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.168597 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168602 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.168611 | orchestrator | 2025-04-14 00:59:56.168616 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.168621 | orchestrator | Monday 14 April 2025 00:56:04 +0000 (0:00:00.749) 0:09:51.042 ********** 2025-04-14 00:59:56.168625 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168630 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168635 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168640 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.168645 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.168650 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.168655 | orchestrator | 2025-04-14 
00:59:56.168659 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.168664 | orchestrator | Monday 14 April 2025 00:56:05 +0000 (0:00:01.320) 0:09:52.362 ********** 2025-04-14 00:59:56.168669 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168674 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168679 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168684 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.168689 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.168693 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.168698 | orchestrator | 2025-04-14 00:59:56.168703 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.168708 | orchestrator | Monday 14 April 2025 00:56:06 +0000 (0:00:01.081) 0:09:53.443 ********** 2025-04-14 00:59:56.168713 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168718 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168723 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168728 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.168732 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.168737 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.168742 | orchestrator | 2025-04-14 00:59:56.168747 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.168754 | orchestrator | Monday 14 April 2025 00:56:07 +0000 (0:00:01.272) 0:09:54.716 ********** 2025-04-14 00:59:56.168760 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168765 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.168769 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.168774 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168779 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168784 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.168789 | orchestrator | 2025-04-14 00:59:56.168794 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.168798 | orchestrator | Monday 14 April 2025 00:56:08 +0000 (0:00:00.828) 0:09:55.544 ********** 2025-04-14 00:59:56.168803 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168808 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168813 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168818 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168822 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168827 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168832 | orchestrator | 2025-04-14 00:59:56.168837 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.168842 | orchestrator | Monday 14 April 2025 00:56:09 +0000 (0:00:01.124) 0:09:56.668 ********** 2025-04-14 00:59:56.168846 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168851 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168856 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168861 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168878 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168883 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168888 | orchestrator | 2025-04-14 00:59:56.168893 | orchestrator | TASK 
[ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.168898 | orchestrator | Monday 14 April 2025 00:56:10 +0000 (0:00:00.679) 0:09:57.348 ********** 2025-04-14 00:59:56.168903 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168911 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168919 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168924 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168929 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168934 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168938 | orchestrator | 2025-04-14 00:59:56.168943 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.168948 | orchestrator | Monday 14 April 2025 00:56:11 +0000 (0:00:00.893) 0:09:58.241 ********** 2025-04-14 00:59:56.168953 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.168958 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.168963 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.168967 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.168972 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.168977 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.168982 | orchestrator | 2025-04-14 00:59:56.168987 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.168992 | orchestrator | Monday 14 April 2025 00:56:12 +0000 (0:00:00.659) 0:09:58.901 ********** 2025-04-14 00:59:56.168997 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169002 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169007 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169011 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169016 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169021 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169026 | orchestrator | 2025-04-14 00:59:56.169031 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.169051 | orchestrator | Monday 14 April 2025 00:56:12 +0000 (0:00:00.897) 0:09:59.798 ********** 2025-04-14 00:59:56.169056 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.169061 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.169066 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.169070 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.169075 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.169080 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.169085 | orchestrator | 2025-04-14 00:59:56.169090 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.169095 | orchestrator | Monday 14 April 2025 00:56:13 +0000 (0:00:01.015) 0:10:00.814 ********** 2025-04-14 00:59:56.169100 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169105 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169109 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169114 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169119 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169124 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169129 | orchestrator | 2025-04-14 00:59:56.169134 | orchestrator | TASK [ceph-handler : set_fact 
handler_mon_status] ****************************** 2025-04-14 00:59:56.169139 | orchestrator | Monday 14 April 2025 00:56:14 +0000 (0:00:00.901) 0:10:01.715 ********** 2025-04-14 00:59:56.169144 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.169149 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.169154 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.169158 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169163 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169168 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169173 | orchestrator | 2025-04-14 00:59:56.169178 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.169188 | orchestrator | Monday 14 April 2025 00:56:15 +0000 (0:00:00.628) 0:10:02.344 ********** 2025-04-14 00:59:56.169193 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169198 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169203 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169207 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.169216 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.169221 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.169226 | orchestrator | 2025-04-14 00:59:56.169230 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.169235 | orchestrator | Monday 14 April 2025 00:56:16 +0000 (0:00:00.942) 0:10:03.287 ********** 2025-04-14 00:59:56.169240 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169245 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169250 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169255 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.169260 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.169264 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.169269 | orchestrator | 2025-04-14 00:59:56.169274 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.169279 | orchestrator | Monday 14 April 2025 00:56:17 +0000 (0:00:00.666) 0:10:03.954 ********** 2025-04-14 00:59:56.169284 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169289 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169294 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169299 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.169303 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.169308 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.169317 | orchestrator | 2025-04-14 00:59:56.169322 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.169327 | orchestrator | Monday 14 April 2025 00:56:17 +0000 (0:00:00.896) 0:10:04.851 ********** 2025-04-14 00:59:56.169332 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169337 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169342 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169347 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169352 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169356 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169361 | orchestrator | 2025-04-14 00:59:56.169366 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 
00:59:56.169371 | orchestrator | Monday 14 April 2025 00:56:18 +0000 (0:00:00.624) 0:10:05.475 ********** 2025-04-14 00:59:56.169376 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169394 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169400 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169405 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169410 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169415 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169420 | orchestrator | 2025-04-14 00:59:56.169425 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.169429 | orchestrator | Monday 14 April 2025 00:56:19 +0000 (0:00:00.919) 0:10:06.395 ********** 2025-04-14 00:59:56.169434 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.169439 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.169444 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.169448 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169453 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169458 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169463 | orchestrator | 2025-04-14 00:59:56.169468 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.169473 | orchestrator | Monday 14 April 2025 00:56:20 +0000 (0:00:00.861) 0:10:07.256 ********** 2025-04-14 00:59:56.169478 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.169482 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.169487 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.169492 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.169497 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.169501 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.169506 | orchestrator | 2025-04-14 00:59:56.169511 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.169521 | orchestrator | Monday 14 April 2025 00:56:21 +0000 (0:00:01.046) 0:10:08.302 ********** 2025-04-14 00:59:56.169526 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169531 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169536 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169541 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169546 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169551 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169555 | orchestrator | 2025-04-14 00:59:56.169560 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.169565 | orchestrator | Monday 14 April 2025 00:56:22 +0000 (0:00:00.697) 0:10:09.000 ********** 2025-04-14 00:59:56.169570 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169575 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169580 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169585 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169589 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169594 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169599 | orchestrator | 2025-04-14 00:59:56.169604 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.169609 | orchestrator | Monday 14 April 
2025 00:56:23 +0000 (0:00:00.946) 0:10:09.947 ********** 2025-04-14 00:59:56.169614 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169618 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169623 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169628 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169633 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169638 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169643 | orchestrator | 2025-04-14 00:59:56.169648 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.169652 | orchestrator | Monday 14 April 2025 00:56:23 +0000 (0:00:00.765) 0:10:10.713 ********** 2025-04-14 00:59:56.169657 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169662 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169667 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169671 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169676 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169681 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169686 | orchestrator | 2025-04-14 00:59:56.169691 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.169696 | orchestrator | Monday 14 April 2025 00:56:24 +0000 (0:00:00.889) 0:10:11.602 ********** 2025-04-14 00:59:56.169700 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169705 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169713 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169718 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169722 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169727 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169732 | orchestrator | 2025-04-14 00:59:56.169737 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.169742 | orchestrator | Monday 14 April 2025 00:56:25 +0000 (0:00:00.621) 0:10:12.224 ********** 2025-04-14 00:59:56.169747 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169751 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169756 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169761 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169766 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169771 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169776 | orchestrator | 2025-04-14 00:59:56.169781 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.169785 | orchestrator | Monday 14 April 2025 00:56:26 +0000 (0:00:00.909) 0:10:13.133 ********** 2025-04-14 00:59:56.169790 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169798 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169803 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169808 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169813 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169817 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169822 | orchestrator | 2025-04-14 00:59:56.169827 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.169832 | 
orchestrator | Monday 14 April 2025 00:56:26 +0000 (0:00:00.623) 0:10:13.757 ********** 2025-04-14 00:59:56.169837 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169842 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169846 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169852 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169860 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169868 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169876 | orchestrator | 2025-04-14 00:59:56.169900 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.169909 | orchestrator | Monday 14 April 2025 00:56:27 +0000 (0:00:00.965) 0:10:14.723 ********** 2025-04-14 00:59:56.169917 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169924 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169932 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169939 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169944 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169949 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169954 | orchestrator | 2025-04-14 00:59:56.169959 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.169964 | orchestrator | Monday 14 April 2025 00:56:28 +0000 (0:00:00.661) 0:10:15.385 ********** 2025-04-14 00:59:56.169968 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.169973 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.169978 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.169983 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.169987 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.169992 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.169997 | orchestrator | 2025-04-14 00:59:56.170002 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.170007 | orchestrator | Monday 14 April 2025 00:56:29 +0000 (0:00:00.928) 0:10:16.314 ********** 2025-04-14 00:59:56.170011 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170063 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170069 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170074 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170079 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170083 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170088 | orchestrator | 2025-04-14 00:59:56.170093 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.170098 | orchestrator | Monday 14 April 2025 00:56:30 +0000 (0:00:00.616) 0:10:16.930 ********** 2025-04-14 00:59:56.170103 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170108 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170113 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170118 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170123 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170128 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170133 | orchestrator | 2025-04-14 00:59:56.170138 | orchestrator | TASK [ceph-config : set_fact 
_osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.170143 | orchestrator | Monday 14 April 2025 00:56:31 +0000 (0:00:01.000) 0:10:17.930 ********** 2025-04-14 00:59:56.170147 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.170152 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-04-14 00:59:56.170162 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170167 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.170172 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-04-14 00:59:56.170177 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170182 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.170187 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-04-14 00:59:56.170192 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170197 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.170202 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.170207 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170212 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.170217 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.170222 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170230 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.170235 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.170239 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170244 | orchestrator | 2025-04-14 00:59:56.170249 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.170254 | orchestrator | Monday 14 April 2025 00:56:31 +0000 (0:00:00.751) 0:10:18.682 ********** 2025-04-14 00:59:56.170259 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-04-14 00:59:56.170266 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-04-14 00:59:56.170271 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170276 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-04-14 00:59:56.170281 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-04-14 00:59:56.170286 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170291 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-04-14 00:59:56.170295 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-04-14 00:59:56.170300 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170305 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-14 00:59:56.170310 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-14 00:59:56.170315 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170320 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-14 00:59:56.170324 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-14 00:59:56.170329 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170334 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-14 00:59:56.170339 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-14 00:59:56.170344 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170348 | orchestrator | 
2025-04-14 00:59:56.170353 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.170358 | orchestrator | Monday 14 April 2025 00:56:33 +0000 (0:00:01.231) 0:10:19.913 ********** 2025-04-14 00:59:56.170363 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170368 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170373 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170377 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170398 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170404 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170409 | orchestrator | 2025-04-14 00:59:56.170414 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.170419 | orchestrator | Monday 14 April 2025 00:56:33 +0000 (0:00:00.836) 0:10:20.749 ********** 2025-04-14 00:59:56.170424 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170429 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170434 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170441 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170446 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170451 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170456 | orchestrator | 2025-04-14 00:59:56.170461 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.170466 | orchestrator | Monday 14 April 2025 00:56:34 +0000 (0:00:00.971) 0:10:21.721 ********** 2025-04-14 00:59:56.170471 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170476 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170481 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170486 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170491 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170495 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170500 | orchestrator | 2025-04-14 00:59:56.170505 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.170510 | orchestrator | Monday 14 April 2025 00:56:35 +0000 (0:00:01.003) 0:10:22.725 ********** 2025-04-14 00:59:56.170515 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170519 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170524 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170529 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170534 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170538 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170543 | orchestrator | 2025-04-14 00:59:56.170548 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.170553 | orchestrator | Monday 14 April 2025 00:56:36 +0000 (0:00:00.699) 0:10:23.425 ********** 2025-04-14 00:59:56.170558 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170563 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170567 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170572 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170577 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170582 | orchestrator | skipping: 
[testbed-node-5] 2025-04-14 00:59:56.170586 | orchestrator | 2025-04-14 00:59:56.170594 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.170599 | orchestrator | Monday 14 April 2025 00:56:37 +0000 (0:00:00.881) 0:10:24.306 ********** 2025-04-14 00:59:56.170604 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170609 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170613 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170618 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170623 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170628 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170632 | orchestrator | 2025-04-14 00:59:56.170637 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.170642 | orchestrator | Monday 14 April 2025 00:56:38 +0000 (0:00:00.676) 0:10:24.982 ********** 2025-04-14 00:59:56.170647 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.170652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.170657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.170661 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170666 | orchestrator | 2025-04-14 00:59:56.170671 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.170676 | orchestrator | Monday 14 April 2025 00:56:38 +0000 (0:00:00.455) 0:10:25.438 ********** 2025-04-14 00:59:56.170681 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.170686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.170690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.170695 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170700 | orchestrator | 2025-04-14 00:59:56.170708 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.170713 | orchestrator | Monday 14 April 2025 00:56:39 +0000 (0:00:00.466) 0:10:25.904 ********** 2025-04-14 00:59:56.170718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.170723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.170728 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.170732 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170737 | orchestrator | 2025-04-14 00:59:56.170742 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.170747 | orchestrator | Monday 14 April 2025 00:56:39 +0000 (0:00:00.411) 0:10:26.316 ********** 2025-04-14 00:59:56.170752 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170757 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170761 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170766 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170771 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170778 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170783 | orchestrator | 2025-04-14 00:59:56.170788 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 
2025-04-14 00:59:56.170793 | orchestrator | Monday 14 April 2025 00:56:40 +0000 (0:00:00.935) 0:10:27.252 ********** 2025-04-14 00:59:56.170797 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.170802 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170807 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.170812 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170817 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.170821 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170839 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.170844 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170849 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.170854 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170859 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.170864 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170869 | orchestrator | 2025-04-14 00:59:56.170874 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.170878 | orchestrator | Monday 14 April 2025 00:56:41 +0000 (0:00:00.857) 0:10:28.109 ********** 2025-04-14 00:59:56.170883 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170888 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170893 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170898 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170902 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170907 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170912 | orchestrator | 2025-04-14 00:59:56.170917 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.170922 | orchestrator | Monday 14 April 2025 00:56:42 +0000 (0:00:01.053) 0:10:29.163 ********** 2025-04-14 00:59:56.170927 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170932 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170936 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.170941 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.170946 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.170951 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.170956 | orchestrator | 2025-04-14 00:59:56.170960 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.170965 | orchestrator | Monday 14 April 2025 00:56:42 +0000 (0:00:00.637) 0:10:29.800 ********** 2025-04-14 00:59:56.170970 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-04-14 00:59:56.170975 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.170980 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-04-14 00:59:56.170988 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.170993 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-04-14 00:59:56.170998 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171002 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.171007 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171012 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.171017 | orchestrator | skipping: [testbed-node-4] 2025-04-14 
00:59:56.171022 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.171027 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171031 | orchestrator | 2025-04-14 00:59:56.171067 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.171072 | orchestrator | Monday 14 April 2025 00:56:44 +0000 (0:00:01.156) 0:10:30.957 ********** 2025-04-14 00:59:56.171077 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171081 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171086 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171091 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.171096 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171101 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.171106 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171111 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.171116 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171121 | orchestrator | 2025-04-14 00:59:56.171126 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.171131 | orchestrator | Monday 14 April 2025 00:56:44 +0000 (0:00:00.708) 0:10:31.666 ********** 2025-04-14 00:59:56.171135 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-04-14 00:59:56.171140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-04-14 00:59:56.171145 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-04-14 00:59:56.171150 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171155 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-04-14 00:59:56.171160 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-04-14 00:59:56.171164 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-04-14 00:59:56.171169 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171174 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-04-14 00:59:56.171179 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-04-14 00:59:56.171184 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-04-14 00:59:56.171189 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.171194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.171198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.171203 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.171213 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.171218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.171223 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171227 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171232 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-04-14 00:59:56.171242 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.171247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.171255 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171263 | orchestrator | 2025-04-14 00:59:56.171268 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.171273 | orchestrator | Monday 14 April 2025 00:56:46 +0000 (0:00:01.532) 0:10:33.198 ********** 2025-04-14 00:59:56.171277 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171282 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171287 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171292 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171297 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171302 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171307 | orchestrator | 2025-04-14 00:59:56.171311 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.171316 | orchestrator | Monday 14 April 2025 00:56:47 +0000 (0:00:01.362) 0:10:34.561 ********** 2025-04-14 00:59:56.171321 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171326 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171331 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171335 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.171340 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171345 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.171394 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171399 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.171404 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171409 | orchestrator | 2025-04-14 00:59:56.171414 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.171419 | orchestrator | Monday 14 April 2025 00:56:49 +0000 (0:00:01.415) 0:10:35.977 ********** 2025-04-14 00:59:56.171423 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171428 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171433 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171442 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171448 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171452 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171457 | orchestrator | 2025-04-14 00:59:56.171462 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.171467 | orchestrator | Monday 14 April 2025 00:56:50 +0000 (0:00:01.410) 0:10:37.387 ********** 2025-04-14 00:59:56.171472 | orchestrator | skipping: [testbed-node-0] 2025-04-14 00:59:56.171477 | orchestrator | skipping: [testbed-node-1] 2025-04-14 00:59:56.171481 | orchestrator | skipping: [testbed-node-2] 2025-04-14 00:59:56.171486 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171491 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171496 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171501 | orchestrator | 2025-04-14 00:59:56.171505 | orchestrator | TASK [ceph-crash : create client.crash keyring] 
******************************** 2025-04-14 00:59:56.171510 | orchestrator | Monday 14 April 2025 00:56:51 +0000 (0:00:01.407) 0:10:38.794 ********** 2025-04-14 00:59:56.171515 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.171520 | orchestrator | 2025-04-14 00:59:56.171527 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-04-14 00:59:56.171532 | orchestrator | Monday 14 April 2025 00:56:55 +0000 (0:00:03.361) 0:10:42.156 ********** 2025-04-14 00:59:56.171537 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.171542 | orchestrator | 2025-04-14 00:59:56.171547 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-04-14 00:59:56.171551 | orchestrator | Monday 14 April 2025 00:56:57 +0000 (0:00:01.842) 0:10:43.998 ********** 2025-04-14 00:59:56.171556 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.171561 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.171566 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.171571 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.171576 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.171584 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.171589 | orchestrator | 2025-04-14 00:59:56.171594 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-04-14 00:59:56.171598 | orchestrator | Monday 14 April 2025 00:56:58 +0000 (0:00:01.714) 0:10:45.713 ********** 2025-04-14 00:59:56.171603 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.171608 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.171613 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.171618 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.171623 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.171627 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.171632 | orchestrator | 2025-04-14 00:59:56.171637 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-04-14 00:59:56.171642 | orchestrator | Monday 14 April 2025 00:57:00 +0000 (0:00:01.334) 0:10:47.048 ********** 2025-04-14 00:59:56.171647 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.171653 | orchestrator | 2025-04-14 00:59:56.171658 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-04-14 00:59:56.171663 | orchestrator | Monday 14 April 2025 00:57:01 +0000 (0:00:01.502) 0:10:48.550 ********** 2025-04-14 00:59:56.171668 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.171673 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.171678 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.171682 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.171687 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.171692 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.171697 | orchestrator | 2025-04-14 00:59:56.171702 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-04-14 00:59:56.171706 | orchestrator | Monday 14 April 2025 00:57:03 +0000 (0:00:01.937) 0:10:50.487 ********** 2025-04-14 00:59:56.171711 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.171716 | 
orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.171721 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.171726 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.171730 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.171735 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.171740 | orchestrator | 2025-04-14 00:59:56.171745 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-04-14 00:59:56.171753 | orchestrator | Monday 14 April 2025 00:57:07 +0000 (0:00:04.283) 0:10:54.771 ********** 2025-04-14 00:59:56.171759 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.171764 | orchestrator | 2025-04-14 00:59:56.171769 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-04-14 00:59:56.171774 | orchestrator | Monday 14 April 2025 00:57:09 +0000 (0:00:01.464) 0:10:56.236 ********** 2025-04-14 00:59:56.171778 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.171783 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.171788 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.171793 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.171798 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.171802 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.171807 | orchestrator | 2025-04-14 00:59:56.171812 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-04-14 00:59:56.171817 | orchestrator | Monday 14 April 2025 00:57:10 +0000 (0:00:00.974) 0:10:57.210 ********** 2025-04-14 00:59:56.171822 | orchestrator | changed: [testbed-node-1] 2025-04-14 00:59:56.171826 | orchestrator | changed: [testbed-node-0] 2025-04-14 00:59:56.171831 | orchestrator | changed: [testbed-node-2] 2025-04-14 00:59:56.171836 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.171841 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.171848 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.171853 | orchestrator | 2025-04-14 00:59:56.171858 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-04-14 00:59:56.171863 | orchestrator | Monday 14 April 2025 00:57:13 +0000 (0:00:02.948) 0:11:00.159 ********** 2025-04-14 00:59:56.171868 | orchestrator | ok: [testbed-node-0] 2025-04-14 00:59:56.171873 | orchestrator | ok: [testbed-node-1] 2025-04-14 00:59:56.171877 | orchestrator | ok: [testbed-node-2] 2025-04-14 00:59:56.171885 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.171890 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.171895 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.171899 | orchestrator | 2025-04-14 00:59:56.171904 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-04-14 00:59:56.171909 | orchestrator | 2025-04-14 00:59:56.171914 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.171919 | orchestrator | Monday 14 April 2025 00:57:16 +0000 (0:00:02.954) 0:11:03.114 ********** 2025-04-14 00:59:56.171924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.171931 | orchestrator | 2025-04-14 
00:59:56.171936 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.171941 | orchestrator | Monday 14 April 2025 00:57:16 +0000 (0:00:00.613) 0:11:03.727 ********** 2025-04-14 00:59:56.171946 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.171951 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.171955 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.171963 | orchestrator | 2025-04-14 00:59:56.171968 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.171972 | orchestrator | Monday 14 April 2025 00:57:17 +0000 (0:00:00.660) 0:11:04.387 ********** 2025-04-14 00:59:56.171977 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.171982 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.171987 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.171992 | orchestrator | 2025-04-14 00:59:56.171996 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.172001 | orchestrator | Monday 14 April 2025 00:57:18 +0000 (0:00:00.746) 0:11:05.134 ********** 2025-04-14 00:59:56.172006 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172011 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172016 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172021 | orchestrator | 2025-04-14 00:59:56.172026 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.172030 | orchestrator | Monday 14 April 2025 00:57:18 +0000 (0:00:00.722) 0:11:05.856 ********** 2025-04-14 00:59:56.172062 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172067 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172072 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172077 | orchestrator | 2025-04-14 00:59:56.172084 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.172089 | orchestrator | Monday 14 April 2025 00:57:20 +0000 (0:00:01.092) 0:11:06.949 ********** 2025-04-14 00:59:56.172094 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172099 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172104 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172109 | orchestrator | 2025-04-14 00:59:56.172114 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.172118 | orchestrator | Monday 14 April 2025 00:57:20 +0000 (0:00:00.530) 0:11:07.479 ********** 2025-04-14 00:59:56.172123 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172128 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172133 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172138 | orchestrator | 2025-04-14 00:59:56.172142 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.172147 | orchestrator | Monday 14 April 2025 00:57:21 +0000 (0:00:00.413) 0:11:07.893 ********** 2025-04-14 00:59:56.172161 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172166 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172170 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172175 | orchestrator | 2025-04-14 00:59:56.172180 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 
2025-04-14 00:59:56.172185 | orchestrator | Monday 14 April 2025 00:57:21 +0000 (0:00:00.429) 0:11:08.323 ********** 2025-04-14 00:59:56.172190 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172194 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172199 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172204 | orchestrator | 2025-04-14 00:59:56.172209 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.172214 | orchestrator | Monday 14 April 2025 00:57:22 +0000 (0:00:00.877) 0:11:09.201 ********** 2025-04-14 00:59:56.172218 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172226 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172231 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172236 | orchestrator | 2025-04-14 00:59:56.172241 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.172246 | orchestrator | Monday 14 April 2025 00:57:22 +0000 (0:00:00.570) 0:11:09.771 ********** 2025-04-14 00:59:56.172250 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172255 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172260 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172265 | orchestrator | 2025-04-14 00:59:56.172270 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.172275 | orchestrator | Monday 14 April 2025 00:57:23 +0000 (0:00:00.515) 0:11:10.286 ********** 2025-04-14 00:59:56.172279 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172284 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172289 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172294 | orchestrator | 2025-04-14 00:59:56.172299 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.172304 | orchestrator | Monday 14 April 2025 00:57:24 +0000 (0:00:00.891) 0:11:11.178 ********** 2025-04-14 00:59:56.172308 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172313 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172318 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172323 | orchestrator | 2025-04-14 00:59:56.172328 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-14 00:59:56.172335 | orchestrator | Monday 14 April 2025 00:57:25 +0000 (0:00:01.093) 0:11:12.272 ********** 2025-04-14 00:59:56.172343 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172351 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172359 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172367 | orchestrator | 2025-04-14 00:59:56.172374 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.172382 | orchestrator | Monday 14 April 2025 00:57:25 +0000 (0:00:00.523) 0:11:12.795 ********** 2025-04-14 00:59:56.172389 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172396 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172403 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172409 | orchestrator | 2025-04-14 00:59:56.172416 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.172423 | orchestrator | Monday 14 April 2025 00:57:26 +0000 (0:00:00.450) 
0:11:13.245 ********** 2025-04-14 00:59:56.172429 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172437 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172444 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172452 | orchestrator | 2025-04-14 00:59:56.172459 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.172466 | orchestrator | Monday 14 April 2025 00:57:26 +0000 (0:00:00.438) 0:11:13.684 ********** 2025-04-14 00:59:56.172479 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172486 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172494 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172505 | orchestrator | 2025-04-14 00:59:56.172512 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.172521 | orchestrator | Monday 14 April 2025 00:57:27 +0000 (0:00:00.890) 0:11:14.574 ********** 2025-04-14 00:59:56.172526 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172531 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172535 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172540 | orchestrator | 2025-04-14 00:59:56.172545 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.172550 | orchestrator | Monday 14 April 2025 00:57:28 +0000 (0:00:00.392) 0:11:14.966 ********** 2025-04-14 00:59:56.172554 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172559 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172564 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172569 | orchestrator | 2025-04-14 00:59:56.172573 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.172578 | orchestrator | Monday 14 April 2025 00:57:28 +0000 (0:00:00.489) 0:11:15.456 ********** 2025-04-14 00:59:56.172583 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172588 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172592 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172597 | orchestrator | 2025-04-14 00:59:56.172602 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.172607 | orchestrator | Monday 14 April 2025 00:57:29 +0000 (0:00:00.462) 0:11:15.919 ********** 2025-04-14 00:59:56.172612 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.172616 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.172621 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.172626 | orchestrator | 2025-04-14 00:59:56.172633 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.172638 | orchestrator | Monday 14 April 2025 00:57:29 +0000 (0:00:00.754) 0:11:16.673 ********** 2025-04-14 00:59:56.172643 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172647 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172652 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172657 | orchestrator | 2025-04-14 00:59:56.172662 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.172666 | orchestrator | Monday 14 April 2025 00:57:30 +0000 (0:00:00.427) 0:11:17.101 ********** 2025-04-14 00:59:56.172671 | orchestrator | skipping: [testbed-node-3] 2025-04-14 
00:59:56.172676 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172681 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172685 | orchestrator | 2025-04-14 00:59:56.172690 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.172695 | orchestrator | Monday 14 April 2025 00:57:30 +0000 (0:00:00.513) 0:11:17.615 ********** 2025-04-14 00:59:56.172700 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172705 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172709 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172714 | orchestrator | 2025-04-14 00:59:56.172719 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.172723 | orchestrator | Monday 14 April 2025 00:57:31 +0000 (0:00:00.385) 0:11:18.000 ********** 2025-04-14 00:59:56.172728 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172733 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172741 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172746 | orchestrator | 2025-04-14 00:59:56.172751 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.172756 | orchestrator | Monday 14 April 2025 00:57:31 +0000 (0:00:00.631) 0:11:18.632 ********** 2025-04-14 00:59:56.172761 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172766 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172774 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172779 | orchestrator | 2025-04-14 00:59:56.172783 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.172788 | orchestrator | Monday 14 April 2025 00:57:32 +0000 (0:00:00.347) 0:11:18.980 ********** 2025-04-14 00:59:56.172793 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172798 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172803 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172807 | orchestrator | 2025-04-14 00:59:56.172812 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.172817 | orchestrator | Monday 14 April 2025 00:57:32 +0000 (0:00:00.339) 0:11:19.320 ********** 2025-04-14 00:59:56.172822 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172827 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172831 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172836 | orchestrator | 2025-04-14 00:59:56.172841 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.172846 | orchestrator | Monday 14 April 2025 00:57:32 +0000 (0:00:00.331) 0:11:19.651 ********** 2025-04-14 00:59:56.172851 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172856 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172861 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172866 | orchestrator | 2025-04-14 00:59:56.172870 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.172875 | orchestrator | Monday 14 April 2025 00:57:33 +0000 (0:00:00.584) 0:11:20.236 ********** 2025-04-14 00:59:56.172880 | orchestrator | skipping: [testbed-node-3] 2025-04-14 
00:59:56.172885 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172890 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172894 | orchestrator | 2025-04-14 00:59:56.172899 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.172904 | orchestrator | Monday 14 April 2025 00:57:33 +0000 (0:00:00.350) 0:11:20.587 ********** 2025-04-14 00:59:56.172909 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172914 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172918 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172923 | orchestrator | 2025-04-14 00:59:56.172928 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.172933 | orchestrator | Monday 14 April 2025 00:57:34 +0000 (0:00:00.346) 0:11:20.934 ********** 2025-04-14 00:59:56.172938 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172942 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172947 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172952 | orchestrator | 2025-04-14 00:59:56.172957 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.172962 | orchestrator | Monday 14 April 2025 00:57:34 +0000 (0:00:00.317) 0:11:21.251 ********** 2025-04-14 00:59:56.172966 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.172971 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.172976 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.172981 | orchestrator | 2025-04-14 00:59:56.172985 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.172990 | orchestrator | Monday 14 April 2025 00:57:35 +0000 (0:00:00.636) 0:11:21.887 ********** 2025-04-14 00:59:56.172995 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.173000 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.173005 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173010 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.173014 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.173019 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173024 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.173066 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.173073 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173081 | orchestrator | 2025-04-14 00:59:56.173086 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.173091 | orchestrator | Monday 14 April 2025 00:57:35 +0000 (0:00:00.399) 0:11:22.286 ********** 2025-04-14 00:59:56.173095 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-14 00:59:56.173100 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-14 00:59:56.173105 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173110 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-14 00:59:56.173115 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-14 00:59:56.173120 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173125 | orchestrator | 
skipping: [testbed-node-5] => (item=osd memory target)  2025-04-14 00:59:56.173129 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-14 00:59:56.173134 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173139 | orchestrator | 2025-04-14 00:59:56.173144 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.173149 | orchestrator | Monday 14 April 2025 00:57:35 +0000 (0:00:00.382) 0:11:22.669 ********** 2025-04-14 00:59:56.173153 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173158 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173163 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173168 | orchestrator | 2025-04-14 00:59:56.173173 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.173177 | orchestrator | Monday 14 April 2025 00:57:36 +0000 (0:00:00.320) 0:11:22.989 ********** 2025-04-14 00:59:56.173182 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173190 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173195 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173200 | orchestrator | 2025-04-14 00:59:56.173205 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.173210 | orchestrator | Monday 14 April 2025 00:57:36 +0000 (0:00:00.646) 0:11:23.635 ********** 2025-04-14 00:59:56.173215 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173219 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173224 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173229 | orchestrator | 2025-04-14 00:59:56.173234 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.173238 | orchestrator | Monday 14 April 2025 00:57:37 +0000 (0:00:00.349) 0:11:23.985 ********** 2025-04-14 00:59:56.173243 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173248 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173253 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173258 | orchestrator | 2025-04-14 00:59:56.173263 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.173270 | orchestrator | Monday 14 April 2025 00:57:37 +0000 (0:00:00.349) 0:11:24.334 ********** 2025-04-14 00:59:56.173274 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173279 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173284 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173289 | orchestrator | 2025-04-14 00:59:56.173294 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.173298 | orchestrator | Monday 14 April 2025 00:57:37 +0000 (0:00:00.331) 0:11:24.666 ********** 2025-04-14 00:59:56.173303 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173308 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173313 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173318 | orchestrator | 2025-04-14 00:59:56.173322 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.173331 | orchestrator | Monday 14 April 2025 00:57:38 +0000 (0:00:00.639) 0:11:25.306 
********** 2025-04-14 00:59:56.173336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.173341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.173345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.173350 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173355 | orchestrator | 2025-04-14 00:59:56.173360 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.173364 | orchestrator | Monday 14 April 2025 00:57:38 +0000 (0:00:00.467) 0:11:25.774 ********** 2025-04-14 00:59:56.173369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.173374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.173379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.173384 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173388 | orchestrator | 2025-04-14 00:59:56.173393 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.173398 | orchestrator | Monday 14 April 2025 00:57:39 +0000 (0:00:00.441) 0:11:26.215 ********** 2025-04-14 00:59:56.173403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.173408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.173412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.173417 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173422 | orchestrator | 2025-04-14 00:59:56.173427 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.173431 | orchestrator | Monday 14 April 2025 00:57:39 +0000 (0:00:00.485) 0:11:26.701 ********** 2025-04-14 00:59:56.173436 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173441 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173446 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173451 | orchestrator | 2025-04-14 00:59:56.173456 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.173460 | orchestrator | Monday 14 April 2025 00:57:40 +0000 (0:00:00.445) 0:11:27.147 ********** 2025-04-14 00:59:56.173465 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.173470 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173475 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.173480 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173485 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.173489 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173494 | orchestrator | 2025-04-14 00:59:56.173499 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.173504 | orchestrator | Monday 14 April 2025 00:57:40 +0000 (0:00:00.462) 0:11:27.610 ********** 2025-04-14 00:59:56.173509 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173513 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173518 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173523 | orchestrator | 2025-04-14 00:59:56.173528 | orchestrator | TASK [ceph-facts : reset rgw_instances 
(workaround)] *************************** 2025-04-14 00:59:56.173533 | orchestrator | Monday 14 April 2025 00:57:41 +0000 (0:00:00.633) 0:11:28.243 ********** 2025-04-14 00:59:56.173537 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173542 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173547 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173552 | orchestrator | 2025-04-14 00:59:56.173556 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.173561 | orchestrator | Monday 14 April 2025 00:57:41 +0000 (0:00:00.364) 0:11:28.607 ********** 2025-04-14 00:59:56.173566 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.173571 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173579 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.173584 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173588 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.173593 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173598 | orchestrator | 2025-04-14 00:59:56.173605 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.173610 | orchestrator | Monday 14 April 2025 00:57:42 +0000 (0:00:00.595) 0:11:29.203 ********** 2025-04-14 00:59:56.173615 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.173620 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173625 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.173630 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173635 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.173640 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173644 | orchestrator | 2025-04-14 00:59:56.173649 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.173654 | orchestrator | Monday 14 April 2025 00:57:42 +0000 (0:00:00.338) 0:11:29.541 ********** 2025-04-14 00:59:56.173659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.173664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.173669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.173673 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173678 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.173683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.173688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.173692 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173697 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.173702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.173707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.173712 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173717 | 
orchestrator | 2025-04-14 00:59:56.173721 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.173726 | orchestrator | Monday 14 April 2025 00:57:43 +0000 (0:00:01.038) 0:11:30.580 ********** 2025-04-14 00:59:56.173731 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173736 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173741 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173745 | orchestrator | 2025-04-14 00:59:56.173750 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.173755 | orchestrator | Monday 14 April 2025 00:57:44 +0000 (0:00:00.553) 0:11:31.134 ********** 2025-04-14 00:59:56.173760 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.173764 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173769 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.173774 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173779 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.173784 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173788 | orchestrator | 2025-04-14 00:59:56.173793 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.173798 | orchestrator | Monday 14 April 2025 00:57:45 +0000 (0:00:00.880) 0:11:32.014 ********** 2025-04-14 00:59:56.173803 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173808 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173816 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173821 | orchestrator | 2025-04-14 00:59:56.173826 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.173831 | orchestrator | Monday 14 April 2025 00:57:45 +0000 (0:00:00.641) 0:11:32.656 ********** 2025-04-14 00:59:56.173836 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173840 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173845 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173850 | orchestrator | 2025-04-14 00:59:56.173858 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-04-14 00:59:56.173870 | orchestrator | Monday 14 April 2025 00:57:46 +0000 (0:00:00.922) 0:11:33.578 ********** 2025-04-14 00:59:56.173878 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.173887 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.173895 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-04-14 00:59:56.173904 | orchestrator | 2025-04-14 00:59:56.173911 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-04-14 00:59:56.173915 | orchestrator | Monday 14 April 2025 00:57:47 +0000 (0:00:00.483) 0:11:34.062 ********** 2025-04-14 00:59:56.173920 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.173925 | orchestrator | 2025-04-14 00:59:56.173930 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-04-14 00:59:56.173935 | orchestrator | Monday 14 April 2025 00:57:49 +0000 (0:00:01.869) 0:11:35.931 ********** 2025-04-14 00:59:56.173941 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 
'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-04-14 00:59:56.173948 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.173953 | orchestrator | 2025-04-14 00:59:56.173957 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-04-14 00:59:56.173962 | orchestrator | Monday 14 April 2025 00:57:50 +0000 (0:00:00.977) 0:11:36.908 ********** 2025-04-14 00:59:56.173971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 00:59:56.173978 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 00:59:56.173983 | orchestrator | 2025-04-14 00:59:56.173988 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-04-14 00:59:56.173992 | orchestrator | Monday 14 April 2025 00:57:57 +0000 (0:00:07.357) 0:11:44.265 ********** 2025-04-14 00:59:56.173997 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-04-14 00:59:56.174002 | orchestrator | 2025-04-14 00:59:56.174007 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-04-14 00:59:56.174011 | orchestrator | Monday 14 April 2025 00:58:00 +0000 (0:00:02.982) 0:11:47.248 ********** 2025-04-14 00:59:56.174082 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.174088 | orchestrator | 2025-04-14 00:59:56.174093 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-04-14 00:59:56.174098 | orchestrator | Monday 14 April 2025 00:58:00 +0000 (0:00:00.559) 0:11:47.808 ********** 2025-04-14 00:59:56.174103 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-14 00:59:56.174107 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-04-14 00:59:56.174112 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-14 00:59:56.174121 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-04-14 00:59:56.174126 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-04-14 00:59:56.174131 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-04-14 00:59:56.174135 | orchestrator | 2025-04-14 00:59:56.174140 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-04-14 00:59:56.174145 | orchestrator | Monday 14 April 2025 00:58:02 +0000 (0:00:01.371) 0:11:49.179 ********** 2025-04-14 00:59:56.174150 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 00:59:56.174154 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.174159 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-14 00:59:56.174164 | 
orchestrator | 2025-04-14 00:59:56.174169 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-04-14 00:59:56.174174 | orchestrator | Monday 14 April 2025 00:58:04 +0000 (0:00:01.878) 0:11:51.057 ********** 2025-04-14 00:59:56.174179 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 00:59:56.174183 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.174188 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174193 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 00:59:56.174198 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.174203 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174208 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 00:59:56.174212 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.174217 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174222 | orchestrator | 2025-04-14 00:59:56.174227 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-04-14 00:59:56.174232 | orchestrator | Monday 14 April 2025 00:58:05 +0000 (0:00:01.212) 0:11:52.270 ********** 2025-04-14 00:59:56.174237 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174242 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174246 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174251 | orchestrator | 2025-04-14 00:59:56.174256 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-04-14 00:59:56.174261 | orchestrator | Monday 14 April 2025 00:58:05 +0000 (0:00:00.582) 0:11:52.852 ********** 2025-04-14 00:59:56.174266 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.174271 | orchestrator | 2025-04-14 00:59:56.174275 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-04-14 00:59:56.174280 | orchestrator | Monday 14 April 2025 00:58:06 +0000 (0:00:00.563) 0:11:53.416 ********** 2025-04-14 00:59:56.174288 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.174293 | orchestrator | 2025-04-14 00:59:56.174298 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-04-14 00:59:56.174303 | orchestrator | Monday 14 April 2025 00:58:07 +0000 (0:00:00.785) 0:11:54.201 ********** 2025-04-14 00:59:56.174307 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174312 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174317 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174322 | orchestrator | 2025-04-14 00:59:56.174327 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-04-14 00:59:56.174332 | orchestrator | Monday 14 April 2025 00:58:08 +0000 (0:00:01.243) 0:11:55.445 ********** 2025-04-14 00:59:56.174336 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174341 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174346 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174351 | orchestrator | 2025-04-14 00:59:56.174358 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-04-14 00:59:56.174366 | 
orchestrator | Monday 14 April 2025 00:58:09 +0000 (0:00:01.151) 0:11:56.596 ********** 2025-04-14 00:59:56.174374 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174379 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174384 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174389 | orchestrator | 2025-04-14 00:59:56.174393 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-04-14 00:59:56.174398 | orchestrator | Monday 14 April 2025 00:58:11 +0000 (0:00:01.782) 0:11:58.379 ********** 2025-04-14 00:59:56.174403 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174408 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174413 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174417 | orchestrator | 2025-04-14 00:59:56.174422 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-04-14 00:59:56.174427 | orchestrator | Monday 14 April 2025 00:58:13 +0000 (0:00:02.171) 0:12:00.550 ********** 2025-04-14 00:59:56.174432 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-04-14 00:59:56.174437 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-04-14 00:59:56.174442 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-04-14 00:59:56.174447 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174451 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174456 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174461 | orchestrator | 2025-04-14 00:59:56.174466 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.174471 | orchestrator | Monday 14 April 2025 00:58:30 +0000 (0:00:17.152) 0:12:17.703 ********** 2025-04-14 00:59:56.174475 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174480 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174485 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174490 | orchestrator | 2025-04-14 00:59:56.174494 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-04-14 00:59:56.174499 | orchestrator | Monday 14 April 2025 00:58:31 +0000 (0:00:00.665) 0:12:18.369 ********** 2025-04-14 00:59:56.174504 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.174509 | orchestrator | 2025-04-14 00:59:56.174514 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-04-14 00:59:56.174519 | orchestrator | Monday 14 April 2025 00:58:32 +0000 (0:00:00.791) 0:12:19.160 ********** 2025-04-14 00:59:56.174523 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174528 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174533 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174538 | orchestrator | 2025-04-14 00:59:56.174543 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-04-14 00:59:56.174547 | orchestrator | Monday 14 April 2025 00:58:32 +0000 (0:00:00.358) 0:12:19.518 ********** 2025-04-14 00:59:56.174552 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174557 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174562 | orchestrator | 
changed: [testbed-node-5] 2025-04-14 00:59:56.174567 | orchestrator | 2025-04-14 00:59:56.174571 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-04-14 00:59:56.174576 | orchestrator | Monday 14 April 2025 00:58:33 +0000 (0:00:01.141) 0:12:20.660 ********** 2025-04-14 00:59:56.174581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.174586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.174591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.174595 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174600 | orchestrator | 2025-04-14 00:59:56.174605 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-04-14 00:59:56.174610 | orchestrator | Monday 14 April 2025 00:58:34 +0000 (0:00:00.938) 0:12:21.598 ********** 2025-04-14 00:59:56.174618 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174622 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174627 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174632 | orchestrator | 2025-04-14 00:59:56.174637 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-04-14 00:59:56.174642 | orchestrator | Monday 14 April 2025 00:58:35 +0000 (0:00:00.634) 0:12:22.233 ********** 2025-04-14 00:59:56.174646 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.174651 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.174656 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.174661 | orchestrator | 2025-04-14 00:59:56.174666 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-14 00:59:56.174671 | orchestrator | 2025-04-14 00:59:56.174675 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-04-14 00:59:56.174680 | orchestrator | Monday 14 April 2025 00:58:37 +0000 (0:00:02.159) 0:12:24.392 ********** 2025-04-14 00:59:56.174685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.174693 | orchestrator | 2025-04-14 00:59:56.174697 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-04-14 00:59:56.174702 | orchestrator | Monday 14 April 2025 00:58:38 +0000 (0:00:00.799) 0:12:25.191 ********** 2025-04-14 00:59:56.174707 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174712 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174717 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174722 | orchestrator | 2025-04-14 00:59:56.174726 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-04-14 00:59:56.174731 | orchestrator | Monday 14 April 2025 00:58:38 +0000 (0:00:00.321) 0:12:25.512 ********** 2025-04-14 00:59:56.174736 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174741 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174748 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174753 | orchestrator | 2025-04-14 00:59:56.174758 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-04-14 00:59:56.174763 | orchestrator | Monday 14 April 2025 00:58:39 +0000 (0:00:00.738) 0:12:26.251 ********** 2025-04-14 
00:59:56.174767 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174772 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174783 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174788 | orchestrator | 2025-04-14 00:59:56.174793 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-04-14 00:59:56.174798 | orchestrator | Monday 14 April 2025 00:58:40 +0000 (0:00:00.781) 0:12:27.033 ********** 2025-04-14 00:59:56.174803 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.174808 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.174813 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.174817 | orchestrator | 2025-04-14 00:59:56.174822 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-04-14 00:59:56.174827 | orchestrator | Monday 14 April 2025 00:58:41 +0000 (0:00:01.119) 0:12:28.153 ********** 2025-04-14 00:59:56.174832 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174837 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174841 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174846 | orchestrator | 2025-04-14 00:59:56.174851 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-04-14 00:59:56.174856 | orchestrator | Monday 14 April 2025 00:58:41 +0000 (0:00:00.366) 0:12:28.520 ********** 2025-04-14 00:59:56.174861 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174865 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174870 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174875 | orchestrator | 2025-04-14 00:59:56.174880 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-04-14 00:59:56.174885 | orchestrator | Monday 14 April 2025 00:58:42 +0000 (0:00:00.361) 0:12:28.881 ********** 2025-04-14 00:59:56.174893 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174897 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174902 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174907 | orchestrator | 2025-04-14 00:59:56.174912 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-04-14 00:59:56.174917 | orchestrator | Monday 14 April 2025 00:58:42 +0000 (0:00:00.669) 0:12:29.551 ********** 2025-04-14 00:59:56.174921 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174926 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174931 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174936 | orchestrator | 2025-04-14 00:59:56.174941 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-04-14 00:59:56.174946 | orchestrator | Monday 14 April 2025 00:58:43 +0000 (0:00:00.350) 0:12:29.902 ********** 2025-04-14 00:59:56.174950 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174955 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174960 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174965 | orchestrator | 2025-04-14 00:59:56.174974 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-04-14 00:59:56.174979 | orchestrator | Monday 14 April 2025 00:58:43 +0000 (0:00:00.350) 0:12:30.252 ********** 2025-04-14 00:59:56.174983 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.174988 | 
orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.174993 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.174998 | orchestrator | 2025-04-14 00:59:56.175003 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-04-14 00:59:56.175008 | orchestrator | Monday 14 April 2025 00:58:43 +0000 (0:00:00.342) 0:12:30.594 ********** 2025-04-14 00:59:56.175012 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.175017 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.175022 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.175027 | orchestrator | 2025-04-14 00:59:56.175031 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-04-14 00:59:56.175049 | orchestrator | Monday 14 April 2025 00:58:44 +0000 (0:00:01.010) 0:12:31.605 ********** 2025-04-14 00:59:56.175058 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175066 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175073 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175081 | orchestrator | 2025-04-14 00:59:56.175089 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-04-14 00:59:56.175097 | orchestrator | Monday 14 April 2025 00:58:45 +0000 (0:00:00.339) 0:12:31.945 ********** 2025-04-14 00:59:56.175104 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175111 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175119 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175127 | orchestrator | 2025-04-14 00:59:56.175135 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-04-14 00:59:56.175143 | orchestrator | Monday 14 April 2025 00:58:45 +0000 (0:00:00.309) 0:12:32.254 ********** 2025-04-14 00:59:56.175148 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.175153 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.175158 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.175163 | orchestrator | 2025-04-14 00:59:56.175167 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-04-14 00:59:56.175172 | orchestrator | Monday 14 April 2025 00:58:45 +0000 (0:00:00.397) 0:12:32.652 ********** 2025-04-14 00:59:56.175177 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.175182 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.175187 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.175191 | orchestrator | 2025-04-14 00:59:56.175196 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-04-14 00:59:56.175201 | orchestrator | Monday 14 April 2025 00:58:46 +0000 (0:00:00.673) 0:12:33.325 ********** 2025-04-14 00:59:56.175210 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.175214 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.175219 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.175224 | orchestrator | 2025-04-14 00:59:56.175229 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-04-14 00:59:56.175234 | orchestrator | Monday 14 April 2025 00:58:46 +0000 (0:00:00.381) 0:12:33.707 ********** 2025-04-14 00:59:56.175239 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175244 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175249 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
00:59:56.175254 | orchestrator | 2025-04-14 00:59:56.175258 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-04-14 00:59:56.175263 | orchestrator | Monday 14 April 2025 00:58:47 +0000 (0:00:00.313) 0:12:34.020 ********** 2025-04-14 00:59:56.175268 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175273 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175278 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175283 | orchestrator | 2025-04-14 00:59:56.175291 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-04-14 00:59:56.175296 | orchestrator | Monday 14 April 2025 00:58:47 +0000 (0:00:00.357) 0:12:34.378 ********** 2025-04-14 00:59:56.175300 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175305 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175313 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175318 | orchestrator | 2025-04-14 00:59:56.175325 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-04-14 00:59:56.175330 | orchestrator | Monday 14 April 2025 00:58:48 +0000 (0:00:00.618) 0:12:34.997 ********** 2025-04-14 00:59:56.175335 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.175340 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.175344 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.175349 | orchestrator | 2025-04-14 00:59:56.175354 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-04-14 00:59:56.175359 | orchestrator | Monday 14 April 2025 00:58:48 +0000 (0:00:00.355) 0:12:35.352 ********** 2025-04-14 00:59:56.175364 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175368 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175373 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175378 | orchestrator | 2025-04-14 00:59:56.175383 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-04-14 00:59:56.175388 | orchestrator | Monday 14 April 2025 00:58:48 +0000 (0:00:00.355) 0:12:35.708 ********** 2025-04-14 00:59:56.175392 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175397 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175402 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175407 | orchestrator | 2025-04-14 00:59:56.175412 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-04-14 00:59:56.175417 | orchestrator | Monday 14 April 2025 00:58:49 +0000 (0:00:00.360) 0:12:36.069 ********** 2025-04-14 00:59:56.175421 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175426 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175431 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175436 | orchestrator | 2025-04-14 00:59:56.175441 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-04-14 00:59:56.175445 | orchestrator | Monday 14 April 2025 00:58:49 +0000 (0:00:00.648) 0:12:36.717 ********** 2025-04-14 00:59:56.175450 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175455 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175460 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175465 | orchestrator | 2025-04-14 00:59:56.175469 | orchestrator | 
TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-04-14 00:59:56.175474 | orchestrator | Monday 14 April 2025 00:58:50 +0000 (0:00:00.339) 0:12:37.056 ********** 2025-04-14 00:59:56.175479 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175484 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175492 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175497 | orchestrator | 2025-04-14 00:59:56.175502 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-04-14 00:59:56.175506 | orchestrator | Monday 14 April 2025 00:58:50 +0000 (0:00:00.337) 0:12:37.394 ********** 2025-04-14 00:59:56.175511 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175516 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175521 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175526 | orchestrator | 2025-04-14 00:59:56.175530 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-04-14 00:59:56.175535 | orchestrator | Monday 14 April 2025 00:58:50 +0000 (0:00:00.318) 0:12:37.712 ********** 2025-04-14 00:59:56.175540 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175545 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175550 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175555 | orchestrator | 2025-04-14 00:59:56.175559 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-04-14 00:59:56.175564 | orchestrator | Monday 14 April 2025 00:58:51 +0000 (0:00:00.652) 0:12:38.364 ********** 2025-04-14 00:59:56.175569 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175574 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175579 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175584 | orchestrator | 2025-04-14 00:59:56.175589 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-04-14 00:59:56.175593 | orchestrator | Monday 14 April 2025 00:58:51 +0000 (0:00:00.334) 0:12:38.699 ********** 2025-04-14 00:59:56.175601 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175606 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175610 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175615 | orchestrator | 2025-04-14 00:59:56.175620 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-04-14 00:59:56.175625 | orchestrator | Monday 14 April 2025 00:58:52 +0000 (0:00:00.325) 0:12:39.025 ********** 2025-04-14 00:59:56.175630 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175635 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175640 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175644 | orchestrator | 2025-04-14 00:59:56.175649 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-04-14 00:59:56.175654 | orchestrator | Monday 14 April 2025 00:58:52 +0000 (0:00:00.349) 0:12:39.374 ********** 2025-04-14 00:59:56.175659 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175664 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175668 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175673 | orchestrator | 2025-04-14 
00:59:56.175678 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-04-14 00:59:56.175683 | orchestrator | Monday 14 April 2025 00:58:53 +0000 (0:00:00.645) 0:12:40.020 ********** 2025-04-14 00:59:56.175688 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175692 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175697 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175702 | orchestrator | 2025-04-14 00:59:56.175707 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-04-14 00:59:56.175714 | orchestrator | Monday 14 April 2025 00:58:53 +0000 (0:00:00.382) 0:12:40.403 ********** 2025-04-14 00:59:56.175719 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.175724 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-04-14 00:59:56.175729 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175734 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.175739 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-04-14 00:59:56.175744 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175749 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.175758 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-04-14 00:59:56.175763 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175767 | orchestrator | 2025-04-14 00:59:56.175772 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-04-14 00:59:56.175777 | orchestrator | Monday 14 April 2025 00:58:53 +0000 (0:00:00.387) 0:12:40.790 ********** 2025-04-14 00:59:56.175782 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-04-14 00:59:56.175790 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-04-14 00:59:56.175794 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175799 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-04-14 00:59:56.175804 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-04-14 00:59:56.175809 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175814 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-04-14 00:59:56.175818 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-04-14 00:59:56.175823 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175828 | orchestrator | 2025-04-14 00:59:56.175833 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-04-14 00:59:56.175838 | orchestrator | Monday 14 April 2025 00:58:54 +0000 (0:00:00.418) 0:12:41.209 ********** 2025-04-14 00:59:56.175842 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175847 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175852 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175857 | orchestrator | 2025-04-14 00:59:56.175864 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-04-14 00:59:56.175872 | orchestrator | Monday 14 April 2025 00:58:55 +0000 (0:00:00.649) 0:12:41.859 ********** 2025-04-14 00:59:56.175880 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175887 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175898 | orchestrator | skipping: [testbed-node-5] 
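The skipped ceph-config tasks above ("run 'ceph-volume lvm batch --report' ...", "run 'ceph-volume lvm list' ...", and the osd_memory_target handling) only do real work on hosts that are about to provision OSDs: they derive num_osds from ceph-volume reports and size osd_memory_target from it. A minimal sketch of the kind of commands behind those tasks, run on an OSD node; the device paths and the use of jq are illustrative assumptions, and in this containerized deployment the ceph-volume calls would normally be executed inside the ceph container:

    # Report (without applying) what 'ceph-volume lvm batch' would create on the given devices
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # Count the OSDs that already exist on this node
    ceph-volume lvm list --format json | jq 'keys | length'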
2025-04-14 00:59:56.175906 | orchestrator | 2025-04-14 00:59:56.175913 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 00:59:56.175921 | orchestrator | Monday 14 April 2025 00:58:55 +0000 (0:00:00.354) 0:12:42.213 ********** 2025-04-14 00:59:56.175930 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175935 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175940 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175944 | orchestrator | 2025-04-14 00:59:56.175949 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 00:59:56.175954 | orchestrator | Monday 14 April 2025 00:58:55 +0000 (0:00:00.430) 0:12:42.643 ********** 2025-04-14 00:59:56.175959 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175964 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175968 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.175973 | orchestrator | 2025-04-14 00:59:56.175978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 00:59:56.175983 | orchestrator | Monday 14 April 2025 00:58:56 +0000 (0:00:00.368) 0:12:43.012 ********** 2025-04-14 00:59:56.175987 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.175992 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.175997 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176002 | orchestrator | 2025-04-14 00:59:56.176007 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 00:59:56.176012 | orchestrator | Monday 14 April 2025 00:58:56 +0000 (0:00:00.691) 0:12:43.703 ********** 2025-04-14 00:59:56.176016 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176021 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176026 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176031 | orchestrator | 2025-04-14 00:59:56.176065 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-04-14 00:59:56.176075 | orchestrator | Monday 14 April 2025 00:58:57 +0000 (0:00:00.366) 0:12:44.069 ********** 2025-04-14 00:59:56.176080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.176084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.176089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.176094 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176099 | orchestrator | 2025-04-14 00:59:56.176104 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 00:59:56.176109 | orchestrator | Monday 14 April 2025 00:58:57 +0000 (0:00:00.441) 0:12:44.511 ********** 2025-04-14 00:59:56.176113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.176118 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.176123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.176128 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176133 | orchestrator | 2025-04-14 00:59:56.176137 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 00:59:56.176142 | 
orchestrator | Monday 14 April 2025 00:58:58 +0000 (0:00:00.468) 0:12:44.979 ********** 2025-04-14 00:59:56.176147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.176152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.176157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.176161 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176166 | orchestrator | 2025-04-14 00:59:56.176171 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.176179 | orchestrator | Monday 14 April 2025 00:58:58 +0000 (0:00:00.460) 0:12:45.440 ********** 2025-04-14 00:59:56.176184 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176189 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176194 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176199 | orchestrator | 2025-04-14 00:59:56.176203 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 00:59:56.176208 | orchestrator | Monday 14 April 2025 00:58:58 +0000 (0:00:00.360) 0:12:45.801 ********** 2025-04-14 00:59:56.176213 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.176218 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176223 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.176227 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176232 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.176237 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176242 | orchestrator | 2025-04-14 00:59:56.176247 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 00:59:56.176252 | orchestrator | Monday 14 April 2025 00:58:59 +0000 (0:00:00.801) 0:12:46.602 ********** 2025-04-14 00:59:56.176256 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176261 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176266 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176271 | orchestrator | 2025-04-14 00:59:56.176276 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 00:59:56.176281 | orchestrator | Monday 14 April 2025 00:59:00 +0000 (0:00:00.374) 0:12:46.977 ********** 2025-04-14 00:59:56.176285 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176290 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176295 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176300 | orchestrator | 2025-04-14 00:59:56.176305 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 00:59:56.176309 | orchestrator | Monday 14 April 2025 00:59:00 +0000 (0:00:00.373) 0:12:47.350 ********** 2025-04-14 00:59:56.176314 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 00:59:56.176319 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176327 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 00:59:56.176332 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176337 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 00:59:56.176342 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176347 | orchestrator | 2025-04-14 00:59:56.176351 | orchestrator | TASK [ceph-facts 
: set_fact rgw_instances_host] ******************************** 2025-04-14 00:59:56.176356 | orchestrator | Monday 14 April 2025 00:59:00 +0000 (0:00:00.455) 0:12:47.806 ********** 2025-04-14 00:59:56.176361 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.176369 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176374 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.176379 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176384 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 00:59:56.176389 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176393 | orchestrator | 2025-04-14 00:59:56.176398 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 00:59:56.176403 | orchestrator | Monday 14 April 2025 00:59:01 +0000 (0:00:00.638) 0:12:48.444 ********** 2025-04-14 00:59:56.176408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.176413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.176417 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.176422 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 00:59:56.176427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 00:59:56.176432 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 00:59:56.176436 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176441 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 00:59:56.176451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 00:59:56.176455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 00:59:56.176460 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176465 | orchestrator | 2025-04-14 00:59:56.176470 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-04-14 00:59:56.176475 | orchestrator | Monday 14 April 2025 00:59:02 +0000 (0:00:00.635) 0:12:49.080 ********** 2025-04-14 00:59:56.176479 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176484 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176489 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176494 | orchestrator | 2025-04-14 00:59:56.176498 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-04-14 00:59:56.176503 | orchestrator | Monday 14 April 2025 00:59:03 +0000 (0:00:00.912) 0:12:49.993 ********** 2025-04-14 00:59:56.176508 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.176513 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176518 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.176522 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176527 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.176532 | orchestrator | skipping: [testbed-node-5] 
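The rgw_instances facts carried through this play (instance rgw0 at 192.168.16.13, 192.168.16.14 and 192.168.16.15 on port 8081, as shown in the skipped items above) are what the generated ceph.conf and the radosgw systemd units consume further down. A quick smoke test along those lines, assuming the endpoints are reachable from wherever the check is run (the curl invocation is only an illustrative sketch, not part of the playbook):

    # Each radosgw instance should answer on its configured endpoint,
    # typically HTTP 200 with an anonymous S3 bucket listing.
    for addr in 192.168.16.13 192.168.16.14 192.168.16.15; do
        curl -s -o /dev/null -w "%{http_code} ${addr}:8081\n" "http://${addr}:8081/"
    done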
2025-04-14 00:59:56.176537 | orchestrator | 2025-04-14 00:59:56.176542 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-04-14 00:59:56.176550 | orchestrator | Monday 14 April 2025 00:59:03 +0000 (0:00:00.621) 0:12:50.614 ********** 2025-04-14 00:59:56.176558 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176573 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176591 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176596 | orchestrator | 2025-04-14 00:59:56.176601 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-04-14 00:59:56.176606 | orchestrator | Monday 14 April 2025 00:59:04 +0000 (0:00:00.834) 0:12:51.449 ********** 2025-04-14 00:59:56.176611 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176615 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176620 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176625 | orchestrator | 2025-04-14 00:59:56.176630 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-04-14 00:59:56.176635 | orchestrator | Monday 14 April 2025 00:59:05 +0000 (0:00:00.581) 0:12:52.031 ********** 2025-04-14 00:59:56.176640 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.176645 | orchestrator | 2025-04-14 00:59:56.176649 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-04-14 00:59:56.176654 | orchestrator | Monday 14 April 2025 00:59:06 +0000 (0:00:00.841) 0:12:52.873 ********** 2025-04-14 00:59:56.176659 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-04-14 00:59:56.176664 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-04-14 00:59:56.176668 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-04-14 00:59:56.176673 | orchestrator | 2025-04-14 00:59:56.176678 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-04-14 00:59:56.176683 | orchestrator | Monday 14 April 2025 00:59:06 +0000 (0:00:00.663) 0:12:53.536 ********** 2025-04-14 00:59:56.176688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 00:59:56.176692 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.176697 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-14 00:59:56.176702 | orchestrator | 2025-04-14 00:59:56.176707 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-04-14 00:59:56.176712 | orchestrator | Monday 14 April 2025 00:59:08 +0000 (0:00:01.828) 0:12:55.365 ********** 2025-04-14 00:59:56.176716 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 00:59:56.176721 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-04-14 00:59:56.176726 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.176731 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 00:59:56.176735 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-04-14 00:59:56.176740 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.176745 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 00:59:56.176750 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-04-14 00:59:56.176755 | 
orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.176760 | orchestrator | 2025-04-14 00:59:56.176765 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-04-14 00:59:56.176770 | orchestrator | Monday 14 April 2025 00:59:09 +0000 (0:00:01.189) 0:12:56.554 ********** 2025-04-14 00:59:56.176778 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176786 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176794 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176802 | orchestrator | 2025-04-14 00:59:56.176811 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-04-14 00:59:56.176819 | orchestrator | Monday 14 April 2025 00:59:10 +0000 (0:00:00.618) 0:12:57.173 ********** 2025-04-14 00:59:56.176825 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176829 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.176834 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.176839 | orchestrator | 2025-04-14 00:59:56.176844 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-04-14 00:59:56.176849 | orchestrator | Monday 14 April 2025 00:59:10 +0000 (0:00:00.368) 0:12:57.542 ********** 2025-04-14 00:59:56.176853 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-04-14 00:59:56.176862 | orchestrator | 2025-04-14 00:59:56.176867 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-04-14 00:59:56.176871 | orchestrator | Monday 14 April 2025 00:59:10 +0000 (0:00:00.257) 0:12:57.799 ********** 2025-04-14 00:59:56.176876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176904 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176909 | orchestrator | 2025-04-14 00:59:56.176913 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-04-14 00:59:56.176918 | orchestrator | Monday 14 April 2025 00:59:11 +0000 (0:00:01.053) 0:12:58.853 ********** 2025-04-14 00:59:56.176923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176960 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.176968 | orchestrator | 2025-04-14 00:59:56.176975 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-04-14 00:59:56.176983 | orchestrator | Monday 14 April 2025 00:59:13 +0000 (0:00:01.007) 0:12:59.861 ********** 2025-04-14 00:59:56.176991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.176999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.177007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.177015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.177023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-04-14 00:59:56.177030 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.177052 | orchestrator | 2025-04-14 00:59:56.177057 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-04-14 00:59:56.177062 | orchestrator | Monday 14 April 2025 00:59:13 +0000 (0:00:00.673) 0:13:00.534 ********** 2025-04-14 00:59:56.177067 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-14 00:59:56.177073 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-14 00:59:56.177082 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-14 00:59:56.177087 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-14 00:59:56.177092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-04-14 00:59:56.177097 | orchestrator | 2025-04-14 00:59:56.177102 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-04-14 00:59:56.177106 | orchestrator | Monday 14 April 2025 00:59:39 +0000 (0:00:25.371) 0:13:25.906 ********** 2025-04-14 00:59:56.177111 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.177116 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.177121 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.177126 | orchestrator | 2025-04-14 00:59:56.177131 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-04-14 00:59:56.177135 | orchestrator | Monday 14 April 2025 
00:59:39 +0000 (0:00:00.522) 0:13:26.428 ********** 2025-04-14 00:59:56.177140 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.177145 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.177150 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.177155 | orchestrator | 2025-04-14 00:59:56.177159 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-04-14 00:59:56.177164 | orchestrator | Monday 14 April 2025 00:59:39 +0000 (0:00:00.348) 0:13:26.777 ********** 2025-04-14 00:59:56.177169 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.177174 | orchestrator | 2025-04-14 00:59:56.177181 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-04-14 00:59:56.177186 | orchestrator | Monday 14 April 2025 00:59:40 +0000 (0:00:00.630) 0:13:27.407 ********** 2025-04-14 00:59:56.177191 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.177196 | orchestrator | 2025-04-14 00:59:56.177201 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-04-14 00:59:56.177206 | orchestrator | Monday 14 April 2025 00:59:41 +0000 (0:00:00.836) 0:13:28.244 ********** 2025-04-14 00:59:56.177211 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177215 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177220 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177225 | orchestrator | 2025-04-14 00:59:56.177230 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-04-14 00:59:56.177235 | orchestrator | Monday 14 April 2025 00:59:42 +0000 (0:00:01.243) 0:13:29.487 ********** 2025-04-14 00:59:56.177239 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177244 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177249 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177254 | orchestrator | 2025-04-14 00:59:56.177259 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-04-14 00:59:56.177267 | orchestrator | Monday 14 April 2025 00:59:43 +0000 (0:00:01.141) 0:13:30.629 ********** 2025-04-14 00:59:56.177272 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177277 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177281 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177286 | orchestrator | 2025-04-14 00:59:56.177291 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-04-14 00:59:56.177296 | orchestrator | Monday 14 April 2025 00:59:45 +0000 (0:00:01.982) 0:13:32.612 ********** 2025-04-14 00:59:56.177301 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.177309 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.177314 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-04-14 00:59:56.177319 | orchestrator | 2025-04-14 00:59:56.177324 | orchestrator | TASK [ceph-rgw : include_tasks 
multisite/main.yml] ***************************** 2025-04-14 00:59:56.177328 | orchestrator | Monday 14 April 2025 00:59:47 +0000 (0:00:01.893) 0:13:34.506 ********** 2025-04-14 00:59:56.177333 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.177338 | orchestrator | skipping: [testbed-node-4] 2025-04-14 00:59:56.177342 | orchestrator | skipping: [testbed-node-5] 2025-04-14 00:59:56.177347 | orchestrator | 2025-04-14 00:59:56.177352 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-04-14 00:59:56.177357 | orchestrator | Monday 14 April 2025 00:59:48 +0000 (0:00:01.210) 0:13:35.716 ********** 2025-04-14 00:59:56.177361 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177366 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177371 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177376 | orchestrator | 2025-04-14 00:59:56.177380 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-04-14 00:59:56.177385 | orchestrator | Monday 14 April 2025 00:59:49 +0000 (0:00:00.670) 0:13:36.387 ********** 2025-04-14 00:59:56.177390 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 00:59:56.177395 | orchestrator | 2025-04-14 00:59:56.177400 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-04-14 00:59:56.177404 | orchestrator | Monday 14 April 2025 00:59:50 +0000 (0:00:00.782) 0:13:37.170 ********** 2025-04-14 00:59:56.177409 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.177414 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.177419 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.177423 | orchestrator | 2025-04-14 00:59:56.177428 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-04-14 00:59:56.177433 | orchestrator | Monday 14 April 2025 00:59:50 +0000 (0:00:00.346) 0:13:37.517 ********** 2025-04-14 00:59:56.177438 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177442 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177447 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177452 | orchestrator | 2025-04-14 00:59:56.177457 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-04-14 00:59:56.177461 | orchestrator | Monday 14 April 2025 00:59:52 +0000 (0:00:01.626) 0:13:39.143 ********** 2025-04-14 00:59:56.177466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 00:59:56.177471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 00:59:56.177476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 00:59:56.177480 | orchestrator | skipping: [testbed-node-3] 2025-04-14 00:59:56.177485 | orchestrator | 2025-04-14 00:59:56.177490 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-04-14 00:59:56.177495 | orchestrator | Monday 14 April 2025 00:59:52 +0000 (0:00:00.676) 0:13:39.819 ********** 2025-04-14 00:59:56.177502 | orchestrator | ok: [testbed-node-3] 2025-04-14 00:59:56.177507 | orchestrator | ok: [testbed-node-4] 2025-04-14 00:59:56.177511 | orchestrator | ok: [testbed-node-5] 2025-04-14 00:59:56.177516 | orchestrator | 2025-04-14 00:59:56.177521 | orchestrator | RUNNING HANDLER [ceph-handler : remove 
tempdir for scripts] ******************** 2025-04-14 00:59:56.177526 | orchestrator | Monday 14 April 2025 00:59:53 +0000 (0:00:00.351) 0:13:40.170 ********** 2025-04-14 00:59:56.177530 | orchestrator | changed: [testbed-node-3] 2025-04-14 00:59:56.177535 | orchestrator | changed: [testbed-node-4] 2025-04-14 00:59:56.177540 | orchestrator | changed: [testbed-node-5] 2025-04-14 00:59:56.177545 | orchestrator | 2025-04-14 00:59:56.177550 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 00:59:56.177558 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-04-14 00:59:56.177563 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-04-14 00:59:56.177568 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-04-14 00:59:56.177573 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-04-14 00:59:56.177578 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-04-14 00:59:56.177583 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-04-14 00:59:56.177587 | orchestrator | 2025-04-14 00:59:56.177592 | orchestrator | 2025-04-14 00:59:56.177597 | orchestrator | 2025-04-14 00:59:56.177604 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 00:59:59.209200 | orchestrator | Monday 14 April 2025 00:59:54 +0000 (0:00:01.276) 0:13:41.446 ********** 2025-04-14 00:59:59.209327 | orchestrator | =============================================================================== 2025-04-14 00:59:59.209347 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.13s 2025-04-14 00:59:59.209362 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 30.65s 2025-04-14 00:59:59.209402 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 25.37s 2025-04-14 00:59:59.209429 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.45s 2025-04-14 00:59:59.209454 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.15s 2025-04-14 00:59:59.209478 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.75s 2025-04-14 00:59:59.209502 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.57s 2025-04-14 00:59:59.209527 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 8.51s 2025-04-14 00:59:59.209550 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.02s 2025-04-14 00:59:59.209574 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.40s 2025-04-14 00:59:59.209598 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 7.36s 2025-04-14 00:59:59.209624 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.91s 2025-04-14 00:59:59.209648 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.76s 2025-04-14 00:59:59.209673 | orchestrator | ceph-config : create ceph initial directories --------------------------- 5.67s 2025-04-14 00:59:59.209700 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.31s 2025-04-14 00:59:59.209724 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.28s 2025-04-14 00:59:59.209750 | orchestrator | ceph-facts : find a running mon container ------------------------------- 3.98s 2025-04-14 00:59:59.209775 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 3.79s 2025-04-14 00:59:59.209802 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.78s 2025-04-14 00:59:59.209827 | orchestrator | ceph-container-common : get ceph version -------------------------------- 3.65s 2025-04-14 00:59:59.209854 | orchestrator | 2025-04-14 00:59:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:59.209880 | orchestrator | 2025-04-14 00:59:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 00:59:59.209962 | orchestrator | 2025-04-14 00:59:59 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 00:59:59.212457 | orchestrator | 2025-04-14 00:59:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 00:59:59.213946 | orchestrator | 2025-04-14 00:59:59 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:02.255200 | orchestrator | 2025-04-14 00:59:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:02.255317 | orchestrator | 2025-04-14 01:00:02 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:02.257332 | orchestrator | 2025-04-14 01:00:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:02.257458 | orchestrator | 2025-04-14 01:00:02 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:05.310400 | orchestrator | 2025-04-14 01:00:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:05.310544 | orchestrator | 2025-04-14 01:00:05 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:05.312365 | orchestrator | 2025-04-14 01:00:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:00:05.313695 | orchestrator | 2025-04-14 01:00:05 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:08.365569 | orchestrator | 2025-04-14 01:00:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:08.365685 | orchestrator | 2025-04-14 01:00:08 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:08.367506 | orchestrator | 2025-04-14 01:00:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:08.369174 | orchestrator | 2025-04-14 01:00:08 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:08.369456 | orchestrator | 2025-04-14 01:00:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:11.426664 | orchestrator | 2025-04-14 01:00:11 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:11.430301 | orchestrator | 2025-04-14 01:00:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:11.431449 | orchestrator | 2025-04-14 01:00:11 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:14.479868 | orchestrator | 2025-04-14 01:00:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:14.480084 | orchestrator | 2025-04-14 01:00:14 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:14.480976 | orchestrator | 2025-04-14 01:00:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:14.487326 | orchestrator | 2025-04-14 01:00:14 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:17.543671 | orchestrator | 2025-04-14 01:00:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:17.543805 | orchestrator | 2025-04-14 01:00:17 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:17.545473 | orchestrator | 2025-04-14 01:00:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:17.545504 | orchestrator | 2025-04-14 01:00:17 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:20.589804 | orchestrator | 2025-04-14 01:00:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:20.589966 | orchestrator | 2025-04-14 01:00:20 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:20.590220 | orchestrator | 2025-04-14 01:00:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:20.591185 | orchestrator | 2025-04-14 01:00:20 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:23.638582 | orchestrator | 2025-04-14 01:00:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:23.639438 | orchestrator | 2025-04-14 01:00:23 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:23.640801 | orchestrator | 2025-04-14 01:00:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:23.643169 | orchestrator | 2025-04-14 01:00:23 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:23.643511 | orchestrator | 2025-04-14 01:00:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:26.685773 | orchestrator | 2025-04-14 01:00:26 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state STARTED 2025-04-14 01:00:26.687081 | 
orchestrator | 2025-04-14 01:00:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:26.689388 | orchestrator | 2025-04-14 01:00:26 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:29.738852 | orchestrator | 2025-04-14 01:00:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:29.738968 | orchestrator | 2025-04-14 01:00:29 | INFO  | Task e516675d-82e9-4236-87e9-d104f9de6fdf is in state SUCCESS 2025-04-14 01:00:29.739708 | orchestrator | 2025-04-14 01:00:29.739738 | orchestrator | 2025-04-14 01:00:29.739750 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-04-14 01:00:29.739761 | orchestrator | 2025-04-14 01:00:29.739771 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-04-14 01:00:29.739781 | orchestrator | Monday 14 April 2025 00:56:49 +0000 (0:00:00.177) 0:00:00.177 ********** 2025-04-14 01:00:29.739790 | orchestrator | ok: [localhost] => { 2025-04-14 01:00:29.739802 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-04-14 01:00:29.739811 | orchestrator | } 2025-04-14 01:00:29.739821 | orchestrator | 2025-04-14 01:00:29.739831 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-04-14 01:00:29.739840 | orchestrator | Monday 14 April 2025 00:56:49 +0000 (0:00:00.041) 0:00:00.218 ********** 2025-04-14 01:00:29.739849 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-04-14 01:00:29.739860 | orchestrator | ...ignoring 2025-04-14 01:00:29.739870 | orchestrator | 2025-04-14 01:00:29.739879 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-04-14 01:00:29.739888 | orchestrator | Monday 14 April 2025 00:56:52 +0000 (0:00:02.535) 0:00:02.753 ********** 2025-04-14 01:00:29.739897 | orchestrator | skipping: [localhost] 2025-04-14 01:00:29.739906 | orchestrator | 2025-04-14 01:00:29.739916 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-04-14 01:00:29.739925 | orchestrator | Monday 14 April 2025 00:56:52 +0000 (0:00:00.049) 0:00:02.803 ********** 2025-04-14 01:00:29.739934 | orchestrator | ok: [localhost] 2025-04-14 01:00:29.739944 | orchestrator | 2025-04-14 01:00:29.739953 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:00:29.739962 | orchestrator | 2025-04-14 01:00:29.739971 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:00:29.740003 | orchestrator | Monday 14 April 2025 00:56:52 +0000 (0:00:00.158) 0:00:02.962 ********** 2025-04-14 01:00:29.740014 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.740023 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.740052 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.740062 | orchestrator | 2025-04-14 01:00:29.740071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:00:29.740081 | orchestrator | Monday 14 April 2025 00:56:53 +0000 (0:00:00.421) 0:00:03.383 ********** 2025-04-14 01:00:29.740090 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-04-14 01:00:29.740118 | 
orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-04-14 01:00:29.740128 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-04-14 01:00:29.740137 | orchestrator | 2025-04-14 01:00:29.740147 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-04-14 01:00:29.740156 | orchestrator | 2025-04-14 01:00:29.740165 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-04-14 01:00:29.740175 | orchestrator | Monday 14 April 2025 00:56:53 +0000 (0:00:00.382) 0:00:03.766 ********** 2025-04-14 01:00:29.740184 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:00:29.740194 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-14 01:00:29.740203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-14 01:00:29.740212 | orchestrator | 2025-04-14 01:00:29.740222 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-14 01:00:29.740231 | orchestrator | Monday 14 April 2025 00:56:54 +0000 (0:00:00.661) 0:00:04.427 ********** 2025-04-14 01:00:29.740240 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:00:29.740251 | orchestrator | 2025-04-14 01:00:29.740260 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-04-14 01:00:29.740270 | orchestrator | Monday 14 April 2025 00:56:54 +0000 (0:00:00.623) 0:00:05.051 ********** 2025-04-14 01:00:29.740293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740307 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740378 | orchestrator | 2025-04-14 01:00:29.740388 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-04-14 01:00:29.740398 | orchestrator | Monday 14 April 2025 00:56:59 +0000 (0:00:04.405) 0:00:09.457 ********** 2025-04-14 01:00:29.740407 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.740417 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.740431 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.740440 | orchestrator | 2025-04-14 01:00:29.740450 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-04-14 01:00:29.740459 | orchestrator | Monday 14 April 2025 00:56:59 +0000 (0:00:00.819) 0:00:10.276 ********** 2025-04-14 01:00:29.740469 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.740478 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.740488 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.740497 | orchestrator | 2025-04-14 01:00:29.740506 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-04-14 01:00:29.740515 | orchestrator | Monday 14 April 2025 00:57:01 +0000 (0:00:01.880) 0:00:12.156 ********** 2025-04-14 01:00:29.740531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740698 | orchestrator | 2025-04-14 01:00:29.740707 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-04-14 01:00:29.740717 | orchestrator | Monday 14 April 2025 00:57:07 +0000 (0:00:06.030) 0:00:18.187 ********** 2025-04-14 01:00:29.740726 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.740735 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.740745 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.740754 | orchestrator | 2025-04-14 01:00:29.740763 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-04-14 01:00:29.740773 | orchestrator | Monday 14 April 2025 00:57:09 +0000 (0:00:01.220) 0:00:19.407 ********** 2025-04-14 01:00:29.740782 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:00:29.740791 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.740801 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:00:29.740810 | orchestrator | 2025-04-14 01:00:29.740820 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-04-14 01:00:29.740829 | orchestrator | Monday 14 April 2025 00:57:19 +0000 (0:00:10.258) 0:00:29.665 ********** 2025-04-14 01:00:29.740844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 
inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-04-14 01:00:29.740891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-04-14 01:00:29.740922 | orchestrator | 2025-04-14 01:00:29.740931 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-04-14 01:00:29.740941 | orchestrator | Monday 14 April 2025 00:57:24 +0000 (0:00:05.114) 0:00:34.779 ********** 2025-04-14 01:00:29.740950 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.740959 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:00:29.740969 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:00:29.740978 | orchestrator | 2025-04-14 01:00:29.741006 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-04-14 01:00:29.741016 | orchestrator | Monday 14 April 2025 00:57:25 +0000 (0:00:01.091) 0:00:35.871 ********** 2025-04-14 01:00:29.741025 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741035 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.741044 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.741054 | orchestrator | 2025-04-14 01:00:29.741063 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-04-14 01:00:29.741072 | orchestrator | Monday 14 April 2025 00:57:26 +0000 (0:00:00.490) 0:00:36.362 ********** 2025-04-14 01:00:29.741082 | orchestrator | ok: [testbed-node-0] 
2025-04-14 01:00:29.741091 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.741100 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.741109 | orchestrator | 2025-04-14 01:00:29.741118 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-04-14 01:00:29.741128 | orchestrator | Monday 14 April 2025 00:57:26 +0000 (0:00:00.507) 0:00:36.869 ********** 2025-04-14 01:00:29.741138 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-04-14 01:00:29.741152 | orchestrator | ...ignoring 2025-04-14 01:00:29.741162 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-04-14 01:00:29.741171 | orchestrator | ...ignoring 2025-04-14 01:00:29.741181 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-04-14 01:00:29.741190 | orchestrator | ...ignoring 2025-04-14 01:00:29.741200 | orchestrator | 2025-04-14 01:00:29.741209 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-04-14 01:00:29.741218 | orchestrator | Monday 14 April 2025 00:57:37 +0000 (0:00:10.874) 0:00:47.744 ********** 2025-04-14 01:00:29.741228 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741237 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.741246 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.741255 | orchestrator | 2025-04-14 01:00:29.741264 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-04-14 01:00:29.741275 | orchestrator | Monday 14 April 2025 00:57:38 +0000 (0:00:00.656) 0:00:48.400 ********** 2025-04-14 01:00:29.741286 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741297 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741307 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741317 | orchestrator | 2025-04-14 01:00:29.741332 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-04-14 01:00:29.741342 | orchestrator | Monday 14 April 2025 00:57:38 +0000 (0:00:00.742) 0:00:49.143 ********** 2025-04-14 01:00:29.741353 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741364 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741374 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741384 | orchestrator | 2025-04-14 01:00:29.741400 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-04-14 01:00:29.741411 | orchestrator | Monday 14 April 2025 00:57:39 +0000 (0:00:00.432) 0:00:49.576 ********** 2025-04-14 01:00:29.741421 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741432 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741442 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741452 | orchestrator | 2025-04-14 01:00:29.741463 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-04-14 01:00:29.741473 | orchestrator | Monday 14 April 2025 00:57:39 +0000 (0:00:00.647) 0:00:50.223 ********** 2025-04-14 01:00:29.741484 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741494 | orchestrator | ok: 
[testbed-node-1] 2025-04-14 01:00:29.741505 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.741515 | orchestrator | 2025-04-14 01:00:29.741525 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-04-14 01:00:29.741536 | orchestrator | Monday 14 April 2025 00:57:40 +0000 (0:00:00.633) 0:00:50.857 ********** 2025-04-14 01:00:29.741547 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741557 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741567 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741578 | orchestrator | 2025-04-14 01:00:29.741588 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-14 01:00:29.741598 | orchestrator | Monday 14 April 2025 00:57:41 +0000 (0:00:00.562) 0:00:51.419 ********** 2025-04-14 01:00:29.741609 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741620 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741630 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-04-14 01:00:29.741639 | orchestrator | 2025-04-14 01:00:29.741648 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-04-14 01:00:29.741658 | orchestrator | Monday 14 April 2025 00:57:41 +0000 (0:00:00.513) 0:00:51.933 ********** 2025-04-14 01:00:29.741667 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.741676 | orchestrator | 2025-04-14 01:00:29.741686 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-04-14 01:00:29.741700 | orchestrator | Monday 14 April 2025 00:57:53 +0000 (0:00:11.494) 0:01:03.427 ********** 2025-04-14 01:00:29.741709 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741718 | orchestrator | 2025-04-14 01:00:29.741728 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-04-14 01:00:29.741737 | orchestrator | Monday 14 April 2025 00:57:53 +0000 (0:00:00.110) 0:01:03.538 ********** 2025-04-14 01:00:29.741747 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741756 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741765 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.741775 | orchestrator | 2025-04-14 01:00:29.741784 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-04-14 01:00:29.741793 | orchestrator | Monday 14 April 2025 00:57:54 +0000 (0:00:01.060) 0:01:04.599 ********** 2025-04-14 01:00:29.741803 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.741812 | orchestrator | 2025-04-14 01:00:29.741821 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-04-14 01:00:29.741831 | orchestrator | Monday 14 April 2025 00:58:03 +0000 (0:00:09.403) 0:01:14.002 ********** 2025-04-14 01:00:29.741840 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
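The "FAILED - RETRYING" line above and the earlier "Timeout when waiting for search string MariaDB in <ip>:3306" failures come from a retrying TCP probe: the play opens the Galera frontend port and waits until the server banner contains the string "MariaDB". Below is a minimal sketch of such a probe, assuming Ansible's wait_for module and illustrative names (mariadb_host, mariadb_port, mariadb_port_check); it is not the verbatim kolla-ansible task.

---
# Minimal sketch of the retrying MariaDB port-liveness probe seen above.
# mariadb_host/mariadb_port are illustrative; the address is copied from
# the log output, not from the real inventory.
- name: Wait for MariaDB to answer on its frontend port
  hosts: localhost
  gather_facts: false
  vars:
    mariadb_host: 192.168.16.10
    mariadb_port: 3306
  tasks:
    - name: Wait until the banner on port 3306 contains "MariaDB"
      ansible.builtin.wait_for:
        host: "{{ mariadb_host }}"
        port: "{{ mariadb_port }}"
        search_regex: MariaDB
        timeout: 10
      register: mariadb_port_check
      retries: 10
      delay: 6
      until: mariadb_port_check is success

During an initial deployment the first attempt is expected to fail because the container has only just been started, which is why the handler above logs one retry before reporting ok.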
2025-04-14 01:00:29.741849 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741859 | orchestrator | 2025-04-14 01:00:29.741868 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-04-14 01:00:29.741877 | orchestrator | Monday 14 April 2025 00:58:10 +0000 (0:00:07.203) 0:01:21.206 ********** 2025-04-14 01:00:29.741886 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.741896 | orchestrator | 2025-04-14 01:00:29.741905 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-04-14 01:00:29.741914 | orchestrator | Monday 14 April 2025 00:58:13 +0000 (0:00:02.752) 0:01:23.958 ********** 2025-04-14 01:00:29.741923 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.741933 | orchestrator | 2025-04-14 01:00:29.741942 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-04-14 01:00:29.741952 | orchestrator | Monday 14 April 2025 00:58:13 +0000 (0:00:00.116) 0:01:24.074 ********** 2025-04-14 01:00:29.741961 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.741970 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.741995 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.742010 | orchestrator | 2025-04-14 01:00:29.742144 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-04-14 01:00:29.742155 | orchestrator | Monday 14 April 2025 00:58:14 +0000 (0:00:00.481) 0:01:24.556 ********** 2025-04-14 01:00:29.742165 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.742174 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:00:29.742184 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:00:29.742193 | orchestrator | 2025-04-14 01:00:29.742202 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-04-14 01:00:29.742212 | orchestrator | Monday 14 April 2025 00:58:14 +0000 (0:00:00.492) 0:01:25.049 ********** 2025-04-14 01:00:29.742221 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-04-14 01:00:29.742230 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.742240 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:00:29.742249 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:00:29.742258 | orchestrator | 2025-04-14 01:00:29.742277 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-04-14 01:00:29.742287 | orchestrator | skipping: no hosts matched 2025-04-14 01:00:29.742296 | orchestrator | 2025-04-14 01:00:29.742305 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-14 01:00:29.742315 | orchestrator | 2025-04-14 01:00:29.742324 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-14 01:00:29.742333 | orchestrator | Monday 14 April 2025 00:58:34 +0000 (0:00:19.722) 0:01:44.771 ********** 2025-04-14 01:00:29.742350 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:00:29.742359 | orchestrator | 2025-04-14 01:00:29.742375 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-14 01:00:29.742384 | orchestrator | Monday 14 April 2025 00:58:55 +0000 (0:00:21.370) 0:02:06.142 ********** 2025-04-14 01:00:29.742394 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.742403 | orchestrator | 
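The "Wait for first MariaDB service to sync WSREP" handler and the "Wait for MariaDB service to sync WSREP" tasks that follow poll the Galera status variable wsrep_local_state_comment until the node reports Synced. The sketch below shows that kind of check, assuming the community.mysql.mysql_query module and placeholder connection details (login_host, root user, a database_password variable); the actual role performs the query through its own tooling.

---
# Minimal sketch of a WSREP sync wait: poll wsrep_local_state_comment
# until the Galera node reports "Synced". Host, user and the
# database_password variable are placeholders, not deployment values.
- name: Wait for the Galera node to reach the Synced state
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Query wsrep_local_state_comment
      community.mysql.mysql_query:
        login_host: 192.168.16.10
        login_user: root
        login_password: "{{ database_password }}"
        query: "SHOW STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_status
      retries: 10
      delay: 6
      until: >-
        wsrep_status.query_result | default([], true) | flatten
        | map(attribute='Value') | first | default('') == 'Synced'

In the plays that follow, each cluster member is restarted in turn and must pass both the port-liveness and the WSREP sync check before the next node is handled, which is why the recap shows alternating "Restart MariaDB container" and "Wait for MariaDB service ..." entries per node.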
2025-04-14 01:00:29.742412 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-14 01:00:29.742421 | orchestrator | Monday 14 April 2025 00:59:11 +0000 (0:00:15.543) 0:02:21.685 ********** 2025-04-14 01:00:29.742431 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.742440 | orchestrator | 2025-04-14 01:00:29.742449 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-04-14 01:00:29.742458 | orchestrator | 2025-04-14 01:00:29.742467 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-14 01:00:29.742477 | orchestrator | Monday 14 April 2025 00:59:14 +0000 (0:00:02.841) 0:02:24.527 ********** 2025-04-14 01:00:29.742486 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:00:29.742495 | orchestrator | 2025-04-14 01:00:29.742504 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-14 01:00:29.742514 | orchestrator | Monday 14 April 2025 00:59:35 +0000 (0:00:21.020) 0:02:45.547 ********** 2025-04-14 01:00:29.742523 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.742532 | orchestrator | 2025-04-14 01:00:29.742541 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-14 01:00:29.742551 | orchestrator | Monday 14 April 2025 00:59:50 +0000 (0:00:15.530) 0:03:01.078 ********** 2025-04-14 01:00:29.742560 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.742569 | orchestrator | 2025-04-14 01:00:29.742578 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-04-14 01:00:29.742587 | orchestrator | 2025-04-14 01:00:29.742597 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-04-14 01:00:29.742606 | orchestrator | Monday 14 April 2025 00:59:53 +0000 (0:00:02.714) 0:03:03.793 ********** 2025-04-14 01:00:29.742615 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.742625 | orchestrator | 2025-04-14 01:00:29.742634 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-04-14 01:00:29.742643 | orchestrator | Monday 14 April 2025 01:00:07 +0000 (0:00:13.772) 0:03:17.565 ********** 2025-04-14 01:00:29.742653 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.742662 | orchestrator | 2025-04-14 01:00:29.742671 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-04-14 01:00:29.742681 | orchestrator | Monday 14 April 2025 01:00:11 +0000 (0:00:04.581) 0:03:22.146 ********** 2025-04-14 01:00:29.742690 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.742699 | orchestrator | 2025-04-14 01:00:29.742709 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-04-14 01:00:29.742718 | orchestrator | 2025-04-14 01:00:29.742727 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-04-14 01:00:29.742736 | orchestrator | Monday 14 April 2025 01:00:14 +0000 (0:00:02.795) 0:03:24.942 ********** 2025-04-14 01:00:29.742745 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:00:29.742755 | orchestrator | 2025-04-14 01:00:29.742764 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-04-14 01:00:29.742773 | orchestrator | Monday 14 
April 2025 01:00:15 +0000 (0:00:00.772) 0:03:25.714 ********** 2025-04-14 01:00:29.742782 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.742792 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.742801 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.742810 | orchestrator | 2025-04-14 01:00:29.742819 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-04-14 01:00:29.742829 | orchestrator | Monday 14 April 2025 01:00:18 +0000 (0:00:02.671) 0:03:28.386 ********** 2025-04-14 01:00:29.742842 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.742852 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.742861 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.742870 | orchestrator | 2025-04-14 01:00:29.742880 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-04-14 01:00:29.742889 | orchestrator | Monday 14 April 2025 01:00:20 +0000 (0:00:02.181) 0:03:30.568 ********** 2025-04-14 01:00:29.742898 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.742907 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.742917 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.742926 | orchestrator | 2025-04-14 01:00:29.742939 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-04-14 01:00:29.742948 | orchestrator | Monday 14 April 2025 01:00:22 +0000 (0:00:02.335) 0:03:32.903 ********** 2025-04-14 01:00:29.742957 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.742966 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.742976 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:00:29.743004 | orchestrator | 2025-04-14 01:00:29.743014 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-04-14 01:00:29.743023 | orchestrator | Monday 14 April 2025 01:00:24 +0000 (0:00:02.162) 0:03:35.066 ********** 2025-04-14 01:00:29.743032 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:00:29.743042 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:00:29.743051 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:00:29.743060 | orchestrator | 2025-04-14 01:00:29.743070 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-04-14 01:00:29.743079 | orchestrator | Monday 14 April 2025 01:00:28 +0000 (0:00:04.137) 0:03:39.204 ********** 2025-04-14 01:00:29.743088 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:00:29.743098 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:00:29.743107 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:00:29.743116 | orchestrator | 2025-04-14 01:00:29.743126 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:00:29.743135 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-04-14 01:00:29.743145 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-04-14 01:00:29.743160 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-14 01:00:32.794241 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-04-14 01:00:32.794365 | orchestrator | 2025-04-14 01:00:32.794385 | orchestrator | 2025-04-14 
01:00:32.794402 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:00:32.794418 | orchestrator | Monday 14 April 2025 01:00:29 +0000 (0:00:00.396) 0:03:39.600 ********** 2025-04-14 01:00:32.794432 | orchestrator | =============================================================================== 2025-04-14 01:00:32.794446 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.39s 2025-04-14 01:00:32.794461 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.07s 2025-04-14 01:00:32.794475 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 19.72s 2025-04-14 01:00:32.794490 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.77s 2025-04-14 01:00:32.794504 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.49s 2025-04-14 01:00:32.794518 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s 2025-04-14 01:00:32.794532 | orchestrator | mariadb : Copying over galera.cnf -------------------------------------- 10.26s 2025-04-14 01:00:32.794546 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 9.40s 2025-04-14 01:00:32.794587 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.20s 2025-04-14 01:00:32.794602 | orchestrator | mariadb : Copying over config.json files for services ------------------- 6.03s 2025-04-14 01:00:32.794616 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.56s 2025-04-14 01:00:32.794630 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 5.11s 2025-04-14 01:00:32.794644 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2025-04-14 01:00:32.794658 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.41s 2025-04-14 01:00:32.794672 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 4.14s 2025-04-14 01:00:32.794686 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s 2025-04-14 01:00:32.794700 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.75s 2025-04-14 01:00:32.794714 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.67s 2025-04-14 01:00:32.794728 | orchestrator | Check MariaDB service --------------------------------------------------- 2.54s 2025-04-14 01:00:32.794742 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.34s 2025-04-14 01:00:32.794757 | orchestrator | 2025-04-14 01:00:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:32.794774 | orchestrator | 2025-04-14 01:00:29 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:00:32.794789 | orchestrator | 2025-04-14 01:00:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:00:32.794824 | orchestrator | 2025-04-14 01:00:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:00:32.799479 | orchestrator | 2025-04-14 01:00:32 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state STARTED 2025-04-14 01:00:32.799961 | orchestrator | 2025-04-14 01:00:32 
| INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:00:32.801678 | orchestrator | 2025-04-14 01:00:32 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED [... identical status polling repeated every ~3 seconds from 01:00:32 through 01:01:52: tasks afc851a2-7042-41e3-be43-561439f9152f, a9446816-610d-41d6-ab8b-c249bf303e45, 807185dd-d98d-455a-a706-864389644103 and 6576bd67-1802-42a9-a079-143a4e4508f2 remained in state STARTED, each cycle followed by "Wait 1 second(s) until the next check" ...] 2025-04-14 01:01:55.265763 | orchestrator | 2025-04-14 01:01:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:01:55.266982 | orchestrator | 2025-04-14 01:01:55 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state STARTED 2025-04-14 01:01:55.268803 | orchestrator | 2025-04-14 01:01:55 | INFO  | Task 
807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:01:55.270543 | orchestrator | 2025-04-14 01:01:55 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:01:58.321523 | orchestrator | 2025-04-14 01:01:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:01:58.321659 | orchestrator | 2025-04-14 01:01:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:01:58.322863 | orchestrator | 2025-04-14 01:01:58 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state STARTED 2025-04-14 01:01:58.323554 | orchestrator | 2025-04-14 01:01:58 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:01:58.325024 | orchestrator | 2025-04-14 01:01:58 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:02:01.382013 | orchestrator | 2025-04-14 01:01:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:01.382180 | orchestrator | 2025-04-14 01:02:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:01.383693 | orchestrator | 2025-04-14 01:02:01 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state STARTED 2025-04-14 01:02:01.385571 | orchestrator | 2025-04-14 01:02:01 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:01.387413 | orchestrator | 2025-04-14 01:02:01 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:02:04.436794 | orchestrator | 2025-04-14 01:02:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:04.436938 | orchestrator | 2025-04-14 01:02:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:04.438139 | orchestrator | 2025-04-14 01:02:04 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state STARTED 2025-04-14 01:02:04.439611 | orchestrator | 2025-04-14 01:02:04 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:04.442789 | orchestrator | 2025-04-14 01:02:04 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:02:07.496287 | orchestrator | 2025-04-14 01:02:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:07.496408 | orchestrator | 2025-04-14 01:02:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:07.497548 | orchestrator | 2025-04-14 01:02:07 | INFO  | Task a9446816-610d-41d6-ab8b-c249bf303e45 is in state SUCCESS 2025-04-14 01:02:07.499194 | orchestrator | 2025-04-14 01:02:07.499229 | orchestrator | 2025-04-14 01:02:07.499242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:02:07.499326 | orchestrator | 2025-04-14 01:02:07.499342 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:02:07.499368 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.354) 0:00:00.354 ********** 2025-04-14 01:02:07.499657 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.499672 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.499684 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.499695 | orchestrator | 2025-04-14 01:02:07.499707 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:02:07.499719 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.409) 0:00:00.763 ********** 2025-04-14 01:02:07.499731 | 
orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-04-14 01:02:07.499742 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-04-14 01:02:07.499753 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-04-14 01:02:07.499764 | orchestrator | 2025-04-14 01:02:07.499776 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-04-14 01:02:07.499787 | orchestrator | 2025-04-14 01:02:07.499798 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-14 01:02:07.499810 | orchestrator | Monday 14 April 2025 01:00:34 +0000 (0:00:00.309) 0:00:01.073 ********** 2025-04-14 01:02:07.499821 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:02:07.499833 | orchestrator | 2025-04-14 01:02:07.499845 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-04-14 01:02:07.499856 | orchestrator | Monday 14 April 2025 01:00:34 +0000 (0:00:00.862) 0:00:01.935 ********** 2025-04-14 01:02:07.499872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.499922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.499950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.499974 | orchestrator | 2025-04-14 01:02:07.499986 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-04-14 01:02:07.499998 | orchestrator | Monday 14 April 2025 01:00:36 +0000 (0:00:01.977) 0:00:03.912 ********** 2025-04-14 01:02:07.500009 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.500021 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.500032 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.500043 | orchestrator | 2025-04-14 01:02:07.500055 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-14 01:02:07.500066 | orchestrator | Monday 14 April 2025 01:00:37 +0000 (0:00:00.366) 0:00:04.279 ********** 2025-04-14 01:02:07.500085 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-14 01:02:07.500097 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-04-14 01:02:07.500108 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-04-14 01:02:07.500178 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-04-14 01:02:07.500196 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-04-14 01:02:07.500208 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-04-14 01:02:07.500219 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-04-14 01:02:07.500230 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-14 01:02:07.500241 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-04-14 01:02:07.500252 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-04-14 01:02:07.500263 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-04-14 01:02:07.500274 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-04-14 01:02:07.500285 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-04-14 01:02:07.500296 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-04-14 01:02:07.500307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-04-14 01:02:07.500318 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-04-14 01:02:07.500394 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-04-14 01:02:07.500407 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-04-14 01:02:07.500419 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-04-14 01:02:07.500436 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-04-14 01:02:07.500449 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-04-14 01:02:07.500462 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-04-14 01:02:07.500479 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-04-14 01:02:07.500492 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-04-14 01:02:07.500504 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-04-14 01:02:07.500523 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-04-14 01:02:07.500536 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-04-14 01:02:07.500549 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-04-14 01:02:07.500561 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-04-14 01:02:07.500573 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-04-14 01:02:07.500585 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-04-14 01:02:07.500597 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-04-14 01:02:07.500609 | orchestrator | 2025-04-14 01:02:07.500622 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.500634 | orchestrator | Monday 14 April 2025 01:00:38 +0000 (0:00:01.121) 0:00:05.400 ********** 2025-04-14 01:02:07.500647 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.500659 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.500671 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.500683 | orchestrator | 2025-04-14 01:02:07.500695 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.500708 | orchestrator | Monday 14 April 2025 01:00:38 +0000 (0:00:00.509) 0:00:05.910 ********** 2025-04-14 01:02:07.500720 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.500733 | orchestrator | 2025-04-14 01:02:07.500752 | orchestrator | TASK [horizon : Update custom policy 
file name] ******************************** 2025-04-14 01:02:07.500764 | orchestrator | Monday 14 April 2025 01:00:39 +0000 (0:00:00.110) 0:00:06.020 ********** 2025-04-14 01:02:07.500776 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.500789 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.500801 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.500813 | orchestrator | 2025-04-14 01:02:07.500826 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.500838 | orchestrator | Monday 14 April 2025 01:00:39 +0000 (0:00:00.501) 0:00:06.521 ********** 2025-04-14 01:02:07.500850 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.500862 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.500874 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.500913 | orchestrator | 2025-04-14 01:02:07.500926 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.500937 | orchestrator | Monday 14 April 2025 01:00:39 +0000 (0:00:00.296) 0:00:06.818 ********** 2025-04-14 01:02:07.500948 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.500964 | orchestrator | 2025-04-14 01:02:07.500976 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.500987 | orchestrator | Monday 14 April 2025 01:00:40 +0000 (0:00:00.264) 0:00:07.083 ********** 2025-04-14 01:02:07.500998 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501010 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.501021 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.501035 | orchestrator | 2025-04-14 01:02:07.501048 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.501061 | orchestrator | Monday 14 April 2025 01:00:40 +0000 (0:00:00.390) 0:00:07.474 ********** 2025-04-14 01:02:07.501074 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.501087 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.501104 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.501117 | orchestrator | 2025-04-14 01:02:07.501130 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.501143 | orchestrator | Monday 14 April 2025 01:00:41 +0000 (0:00:00.588) 0:00:08.062 ********** 2025-04-14 01:02:07.501156 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501168 | orchestrator | 2025-04-14 01:02:07.501181 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.501194 | orchestrator | Monday 14 April 2025 01:00:41 +0000 (0:00:00.145) 0:00:08.208 ********** 2025-04-14 01:02:07.501207 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501220 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.501232 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.501245 | orchestrator | 2025-04-14 01:02:07.501258 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.501271 | orchestrator | Monday 14 April 2025 01:00:41 +0000 (0:00:00.456) 0:00:08.664 ********** 2025-04-14 01:02:07.501283 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.501296 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.501309 | orchestrator | ok: [testbed-node-2] 2025-04-14 
01:02:07.501322 | orchestrator | 2025-04-14 01:02:07.501335 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.501347 | orchestrator | Monday 14 April 2025 01:00:42 +0000 (0:00:00.478) 0:00:09.143 ********** 2025-04-14 01:02:07.501360 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501373 | orchestrator | 2025-04-14 01:02:07.501384 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.501395 | orchestrator | Monday 14 April 2025 01:00:42 +0000 (0:00:00.113) 0:00:09.256 ********** 2025-04-14 01:02:07.501406 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501418 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.501429 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.501440 | orchestrator | 2025-04-14 01:02:07.501451 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.501462 | orchestrator | Monday 14 April 2025 01:00:42 +0000 (0:00:00.441) 0:00:09.698 ********** 2025-04-14 01:02:07.501473 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.501484 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.501495 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.501507 | orchestrator | 2025-04-14 01:02:07.501518 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.501529 | orchestrator | Monday 14 April 2025 01:00:43 +0000 (0:00:00.337) 0:00:10.035 ********** 2025-04-14 01:02:07.501540 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501551 | orchestrator | 2025-04-14 01:02:07.501562 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.501574 | orchestrator | Monday 14 April 2025 01:00:43 +0000 (0:00:00.236) 0:00:10.272 ********** 2025-04-14 01:02:07.501585 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501596 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.501607 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.501619 | orchestrator | 2025-04-14 01:02:07.501634 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.501646 | orchestrator | Monday 14 April 2025 01:00:43 +0000 (0:00:00.298) 0:00:10.571 ********** 2025-04-14 01:02:07.501657 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.501668 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.501679 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.501691 | orchestrator | 2025-04-14 01:02:07.501702 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.501713 | orchestrator | Monday 14 April 2025 01:00:44 +0000 (0:00:00.544) 0:00:11.115 ********** 2025-04-14 01:02:07.501724 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501735 | orchestrator | 2025-04-14 01:02:07.501747 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.501763 | orchestrator | Monday 14 April 2025 01:00:44 +0000 (0:00:00.140) 0:00:11.255 ********** 2025-04-14 01:02:07.501775 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501786 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.501797 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.501809 | 
orchestrator | 2025-04-14 01:02:07.501820 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.501831 | orchestrator | Monday 14 April 2025 01:00:44 +0000 (0:00:00.411) 0:00:11.666 ********** 2025-04-14 01:02:07.501847 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.501859 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.501870 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.501936 | orchestrator | 2025-04-14 01:02:07.501950 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.501962 | orchestrator | Monday 14 April 2025 01:00:45 +0000 (0:00:00.468) 0:00:12.135 ********** 2025-04-14 01:02:07.501973 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.501984 | orchestrator | 2025-04-14 01:02:07.501996 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.502007 | orchestrator | Monday 14 April 2025 01:00:45 +0000 (0:00:00.154) 0:00:12.290 ********** 2025-04-14 01:02:07.502060 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502075 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.502086 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.502098 | orchestrator | 2025-04-14 01:02:07.502109 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.502120 | orchestrator | Monday 14 April 2025 01:00:45 +0000 (0:00:00.472) 0:00:12.762 ********** 2025-04-14 01:02:07.502131 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.502142 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.502154 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.502165 | orchestrator | 2025-04-14 01:02:07.502176 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.502187 | orchestrator | Monday 14 April 2025 01:00:46 +0000 (0:00:00.475) 0:00:13.238 ********** 2025-04-14 01:02:07.502198 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502210 | orchestrator | 2025-04-14 01:02:07.502221 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.502232 | orchestrator | Monday 14 April 2025 01:00:46 +0000 (0:00:00.130) 0:00:13.368 ********** 2025-04-14 01:02:07.502243 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502254 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.502265 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.502276 | orchestrator | 2025-04-14 01:02:07.502288 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.502299 | orchestrator | Monday 14 April 2025 01:00:46 +0000 (0:00:00.432) 0:00:13.801 ********** 2025-04-14 01:02:07.502310 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.502322 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.502333 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.502344 | orchestrator | 2025-04-14 01:02:07.502355 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.502366 | orchestrator | Monday 14 April 2025 01:00:47 +0000 (0:00:00.341) 0:00:14.142 ********** 2025-04-14 01:02:07.502377 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502389 | orchestrator | 2025-04-14 01:02:07.502400 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.502411 | orchestrator | Monday 14 April 2025 01:00:47 +0000 (0:00:00.115) 0:00:14.258 ********** 2025-04-14 01:02:07.502422 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502433 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.502445 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.502464 | orchestrator | 2025-04-14 01:02:07.502476 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.502494 | orchestrator | Monday 14 April 2025 01:00:47 +0000 (0:00:00.454) 0:00:14.712 ********** 2025-04-14 01:02:07.502505 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.502517 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.502529 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.502540 | orchestrator | 2025-04-14 01:02:07.502551 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.502563 | orchestrator | Monday 14 April 2025 01:00:48 +0000 (0:00:00.463) 0:00:15.176 ********** 2025-04-14 01:02:07.502574 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502585 | orchestrator | 2025-04-14 01:02:07.502596 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.502607 | orchestrator | Monday 14 April 2025 01:00:48 +0000 (0:00:00.118) 0:00:15.294 ********** 2025-04-14 01:02:07.502618 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502629 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.502641 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.502652 | orchestrator | 2025-04-14 01:02:07.502663 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-04-14 01:02:07.502674 | orchestrator | Monday 14 April 2025 01:00:48 +0000 (0:00:00.491) 0:00:15.785 ********** 2025-04-14 01:02:07.502685 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:07.502696 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:07.502708 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:07.502719 | orchestrator | 2025-04-14 01:02:07.502739 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-04-14 01:02:07.502751 | orchestrator | Monday 14 April 2025 01:00:49 +0000 (0:00:00.478) 0:00:16.264 ********** 2025-04-14 01:02:07.502762 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502773 | orchestrator | 2025-04-14 01:02:07.502784 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-04-14 01:02:07.502795 | orchestrator | Monday 14 April 2025 01:00:49 +0000 (0:00:00.116) 0:00:16.381 ********** 2025-04-14 01:02:07.502806 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.502817 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.502829 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.502840 | orchestrator | 2025-04-14 01:02:07.502851 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-04-14 01:02:07.502862 | orchestrator | Monday 14 April 2025 01:00:49 +0000 (0:00:00.468) 0:00:16.849 ********** 2025-04-14 01:02:07.502873 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:07.502898 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:07.502910 
| orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:07.502921 | orchestrator | 2025-04-14 01:02:07.502932 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-04-14 01:02:07.502943 | orchestrator | Monday 14 April 2025 01:00:52 +0000 (0:00:02.252) 0:00:19.102 ********** 2025-04-14 01:02:07.502954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-14 01:02:07.502972 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-14 01:02:07.502983 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-04-14 01:02:07.502994 | orchestrator | 2025-04-14 01:02:07.503006 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-04-14 01:02:07.503017 | orchestrator | Monday 14 April 2025 01:00:54 +0000 (0:00:02.248) 0:00:21.350 ********** 2025-04-14 01:02:07.503028 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-14 01:02:07.503040 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-14 01:02:07.503051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-04-14 01:02:07.503062 | orchestrator | 2025-04-14 01:02:07.503073 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-04-14 01:02:07.503090 | orchestrator | Monday 14 April 2025 01:00:57 +0000 (0:00:02.667) 0:00:24.018 ********** 2025-04-14 01:02:07.503101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-14 01:02:07.503113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-14 01:02:07.503124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-04-14 01:02:07.503135 | orchestrator | 2025-04-14 01:02:07.503146 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-04-14 01:02:07.503157 | orchestrator | Monday 14 April 2025 01:00:59 +0000 (0:00:02.244) 0:00:26.262 ********** 2025-04-14 01:02:07.503168 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.503180 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.503191 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.503202 | orchestrator | 2025-04-14 01:02:07.503213 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-04-14 01:02:07.503224 | orchestrator | Monday 14 April 2025 01:00:59 +0000 (0:00:00.271) 0:00:26.534 ********** 2025-04-14 01:02:07.503235 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.503246 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.503257 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.503268 | orchestrator | 2025-04-14 01:02:07.503279 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-14 01:02:07.503290 | orchestrator | Monday 14 April 2025 01:01:00 +0000 (0:00:00.430) 0:00:26.965 ********** 2025-04-14 01:02:07.503301 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-04-14 01:02:07.503313 | orchestrator | 2025-04-14 01:02:07.503324 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-04-14 01:02:07.503335 | orchestrator | Monday 14 April 2025 01:01:00 +0000 (0:00:00.948) 0:00:27.913 ********** 2025-04-14 01:02:07.503352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503417 | orchestrator | 2025-04-14 01:02:07.503429 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 
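The loop items echoed above are the same rendered horizon service definition applied once per controller; only the healthcheck IP differs between testbed-node-0/1/2. A minimal Python sketch of its shape, trimmed to the fields relevant here — the values are copied from the log, while the variable name horizon_services (kolla-ansible convention) and the trimming of the environment and haproxy sections are assumptions:

# Minimal sketch of the service definition echoed in the loop items above.
# Values are taken from the log; the name "horizon_services" and the
# trimmed environment/haproxy sections are assumptions for brevity.
horizon_services = {
    "horizon": {
        "container_name": "horizon",
        "group": "horizon",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/horizon:24.0.1.20241206",
        "environment": {"ENABLE_DESIGNATE": "yes", "ENABLE_HEAT": "yes",
                        "ENABLE_MAGNUM": "yes", "FORCE_GENERATE": "no"},
        "volumes": ["/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro",
                    "kolla_logs:/var/log/kolla/"],
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
                        "timeout": "30"},
        "haproxy": {"horizon": {"enabled": True, "mode": "http", "external": False,
                                "port": "443", "listen_port": "80",
                                "tls_backend": "no"}},
    },
}

if __name__ == "__main__":
    svc = horizon_services["horizon"]
    print(svc["image"])
    print(svc["healthcheck"]["test"][-1])

Every haproxy entry carries tls_backend: 'no', which is consistent with the backend internal TLS certificate and key tasks below being skipped on all three nodes.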
2025-04-14 01:02:07.503441 | orchestrator | Monday 14 April 2025 01:01:02 +0000 (0:00:01.824) 0:00:29.738 ********** 2025-04-14 01:02:07.503452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503468 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.503487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503504 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.503516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503532 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.503543 | orchestrator | 2025-04-14 01:02:07.503555 | orchestrator | TASK [service-cert-copy : horizon 
| Copying over backend internal TLS key] ***** 2025-04-14 01:02:07.503566 | orchestrator | Monday 14 April 2025 01:01:03 +0000 (0:00:00.844) 0:00:30.583 ********** 2025-04-14 01:02:07.503585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503603 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.503614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503635 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.503655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-04-14 01:02:07.503671 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.503683 | orchestrator | 2025-04-14 01:02:07.503694 | orchestrator | TASK 
[horizon : Deploy horizon container] ************************************** 2025-04-14 01:02:07.503705 | orchestrator | Monday 14 April 2025 01:01:05 +0000 (0:00:01.622) 0:00:32.205 ********** 2025-04-14 01:02:07.503721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-04-14 01:02:07.503796 | orchestrator | 2025-04-14 01:02:07.503808 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-14 01:02:07.503819 | orchestrator | Monday 14 April 2025 01:01:10 +0000 (0:00:05.295) 
0:00:37.500 ********** 2025-04-14 01:02:07.503830 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:07.503842 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:07.503853 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:07.503864 | orchestrator | 2025-04-14 01:02:07.503875 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-04-14 01:02:07.503928 | orchestrator | Monday 14 April 2025 01:01:11 +0000 (0:00:00.551) 0:00:38.052 ********** 2025-04-14 01:02:07.503941 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:02:07.503952 | orchestrator | 2025-04-14 01:02:07.503963 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-04-14 01:02:07.503975 | orchestrator | Monday 14 April 2025 01:01:11 +0000 (0:00:00.655) 0:00:38.708 ********** 2025-04-14 01:02:07.503986 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:07.503997 | orchestrator | 2025-04-14 01:02:07.504013 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-04-14 01:02:07.504025 | orchestrator | Monday 14 April 2025 01:01:14 +0000 (0:00:02.580) 0:00:41.288 ********** 2025-04-14 01:02:07.504036 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:07.504047 | orchestrator | 2025-04-14 01:02:07.504058 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-04-14 01:02:07.504069 | orchestrator | Monday 14 April 2025 01:01:16 +0000 (0:00:02.149) 0:00:43.438 ********** 2025-04-14 01:02:07.504081 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:07.504092 | orchestrator | 2025-04-14 01:02:07.504103 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-14 01:02:07.504114 | orchestrator | Monday 14 April 2025 01:01:30 +0000 (0:00:13.566) 0:00:57.004 ********** 2025-04-14 01:02:07.504125 | orchestrator | 2025-04-14 01:02:07.504136 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-14 01:02:07.504147 | orchestrator | Monday 14 April 2025 01:01:30 +0000 (0:00:00.061) 0:00:57.066 ********** 2025-04-14 01:02:07.504158 | orchestrator | 2025-04-14 01:02:07.504169 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-04-14 01:02:07.504181 | orchestrator | Monday 14 April 2025 01:01:30 +0000 (0:00:00.202) 0:00:57.268 ********** 2025-04-14 01:02:07.504192 | orchestrator | 2025-04-14 01:02:07.504203 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-04-14 01:02:07.504214 | orchestrator | Monday 14 April 2025 01:01:30 +0000 (0:00:00.065) 0:00:57.333 ********** 2025-04-14 01:02:07.504225 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:07.504237 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:07.504248 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:07.504265 | orchestrator | 2025-04-14 01:02:07.504277 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:02:07.504288 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-14 01:02:07.504300 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-14 
01:02:07.504311 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-04-14 01:02:07.504322 | orchestrator | 2025-04-14 01:02:07.504333 | orchestrator | 2025-04-14 01:02:07.504345 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:02:07.504356 | orchestrator | Monday 14 April 2025 01:02:05 +0000 (0:00:35.134) 0:01:32.467 ********** 2025-04-14 01:02:07.504367 | orchestrator | =============================================================================== 2025-04-14 01:02:07.504378 | orchestrator | horizon : Restart horizon container ------------------------------------ 35.13s 2025-04-14 01:02:07.504389 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.57s 2025-04-14 01:02:07.504401 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.30s 2025-04-14 01:02:07.504412 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.67s 2025-04-14 01:02:07.504423 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.58s 2025-04-14 01:02:07.504434 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.25s 2025-04-14 01:02:07.504445 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.25s 2025-04-14 01:02:07.504456 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.24s 2025-04-14 01:02:07.504467 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.15s 2025-04-14 01:02:07.504478 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.98s 2025-04-14 01:02:07.504488 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.82s 2025-04-14 01:02:07.504498 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.62s 2025-04-14 01:02:07.504508 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.12s 2025-04-14 01:02:07.504523 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2025-04-14 01:02:10.561121 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2025-04-14 01:02:10.561267 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.84s 2025-04-14 01:02:10.561290 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.66s 2025-04-14 01:02:10.561306 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2025-04-14 01:02:10.561322 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2025-04-14 01:02:10.561337 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-04-14 01:02:10.561353 | orchestrator | 2025-04-14 01:02:07 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:10.561368 | orchestrator | 2025-04-14 01:02:07 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:02:10.561384 | orchestrator | 2025-04-14 01:02:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:10.561418 | orchestrator | 2025-04-14 01:02:10 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:10.563417 | orchestrator | 2025-04-14 01:02:10 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:10.564045 | orchestrator | 2025-04-14 01:02:10 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state STARTED 2025-04-14 01:02:10.564098 | orchestrator | 2025-04-14 01:02:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:13.612751 | orchestrator | 2025-04-14 01:02:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:13.613992 | orchestrator | 2025-04-14 01:02:13 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:13.616786 | orchestrator | 2025-04-14 01:02:13 | INFO  | Task 6576bd67-1802-42a9-a079-143a4e4508f2 is in state SUCCESS 2025-04-14 01:02:13.619216 | orchestrator | 2025-04-14 01:02:13.619273 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 01:02:13.619290 | orchestrator | 2025-04-14 01:02:13.619304 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-04-14 01:02:13.619319 | orchestrator | 2025-04-14 01:02:13.619333 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-14 01:02:13.619347 | orchestrator | Monday 14 April 2025 00:59:59 +0000 (0:00:01.182) 0:00:01.182 ********** 2025-04-14 01:02:13.619363 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:02:13.619390 | orchestrator | 2025-04-14 01:02:13.619415 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-14 01:02:13.619440 | orchestrator | Monday 14 April 2025 01:00:00 +0000 (0:00:00.567) 0:00:01.749 ********** 2025-04-14 01:02:13.619465 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-04-14 01:02:13.619492 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-04-14 01:02:13.619517 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-04-14 01:02:13.619541 | orchestrator | 2025-04-14 01:02:13.619567 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-14 01:02:13.619582 | orchestrator | Monday 14 April 2025 01:00:01 +0000 (0:00:00.885) 0:00:02.635 ********** 2025-04-14 01:02:13.619597 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:02:13.619611 | orchestrator | 2025-04-14 01:02:13.619625 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-14 01:02:13.619639 | orchestrator | Monday 14 April 2025 01:00:02 +0000 (0:00:00.771) 0:00:03.407 ********** 2025-04-14 01:02:13.619653 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.619668 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.619682 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.619696 | orchestrator | 2025-04-14 01:02:13.619710 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-14 01:02:13.619724 | orchestrator | Monday 14 April 2025 01:00:02 +0000 (0:00:00.728) 0:00:04.135 ********** 2025-04-14 01:02:13.619737 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.619752 | orchestrator | ok: [testbed-node-4] 
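The ceph-facts tasks running here work out how to talk to the existing cluster before any pools are created: whether the host is atomic, whether podman is present, which container binary to use, and which ceph-mon containers are already running on the controllers. A rough, self-contained re-creation of the two container probes — the ps filter command is quoted from the output further down, while preferring podman over docker is an assumption about the role's exact condition:

from shutil import which
import subprocess

# Illustrative re-creation of the ceph-facts container probes in this play.
# The filter command is quoted from the log; "podman if installed, otherwise
# docker" as the container_binary rule is an assumption.
def container_binary() -> str:
    return "podman" if which("podman") else "docker"

def running_mon_container(binary: str, hostname: str) -> str:
    # e.g. "docker ps -q --filter name=ceph-mon-testbed-node-0" (from the log)
    out = subprocess.run(
        [binary, "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    return out.stdout.strip()  # container id such as 94671843efde, or ""

if __name__ == "__main__":
    binary = container_binary()
    for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
        print(node, running_mon_container(binary, node) or "no mon container")

The container IDs returned for the three mons in this run (94671843efde, 53f9bc97ddf6, 170466f45f38, visible in the task output below) are what the subsequent set_fact running_mon and _container_exec_cmd tasks build on.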
2025-04-14 01:02:13.619767 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.619782 | orchestrator | 2025-04-14 01:02:13.619798 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-14 01:02:13.619814 | orchestrator | Monday 14 April 2025 01:00:03 +0000 (0:00:00.327) 0:00:04.462 ********** 2025-04-14 01:02:13.619830 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.619846 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620026 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620052 | orchestrator | 2025-04-14 01:02:13.620067 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-14 01:02:13.620081 | orchestrator | Monday 14 April 2025 01:00:04 +0000 (0:00:00.847) 0:00:05.310 ********** 2025-04-14 01:02:13.620095 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.620109 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620123 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620147 | orchestrator | 2025-04-14 01:02:13.620162 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-14 01:02:13.620197 | orchestrator | Monday 14 April 2025 01:00:04 +0000 (0:00:00.376) 0:00:05.686 ********** 2025-04-14 01:02:13.620211 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.620225 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620239 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620253 | orchestrator | 2025-04-14 01:02:13.620267 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-14 01:02:13.620281 | orchestrator | Monday 14 April 2025 01:00:04 +0000 (0:00:00.342) 0:00:06.029 ********** 2025-04-14 01:02:13.620296 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.620310 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620323 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620337 | orchestrator | 2025-04-14 01:02:13.620351 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-14 01:02:13.620365 | orchestrator | Monday 14 April 2025 01:00:05 +0000 (0:00:00.323) 0:00:06.353 ********** 2025-04-14 01:02:13.620380 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.620394 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.620408 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.620422 | orchestrator | 2025-04-14 01:02:13.620436 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-14 01:02:13.620450 | orchestrator | Monday 14 April 2025 01:00:05 +0000 (0:00:00.553) 0:00:06.907 ********** 2025-04-14 01:02:13.620464 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.620478 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620492 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620506 | orchestrator | 2025-04-14 01:02:13.620528 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-14 01:02:13.620552 | orchestrator | Monday 14 April 2025 01:00:05 +0000 (0:00:00.284) 0:00:07.191 ********** 2025-04-14 01:02:13.620569 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-14 01:02:13.620588 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:13.620603 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:13.620617 | orchestrator | 2025-04-14 01:02:13.620635 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-14 01:02:13.620659 | orchestrator | Monday 14 April 2025 01:00:06 +0000 (0:00:00.722) 0:00:07.914 ********** 2025-04-14 01:02:13.620674 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.620688 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.620702 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.620715 | orchestrator | 2025-04-14 01:02:13.620729 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-14 01:02:13.620743 | orchestrator | Monday 14 April 2025 01:00:07 +0000 (0:00:00.441) 0:00:08.355 ********** 2025-04-14 01:02:13.620773 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-14 01:02:13.620798 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:13.620816 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:13.620830 | orchestrator | 2025-04-14 01:02:13.620844 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-14 01:02:13.620858 | orchestrator | Monday 14 April 2025 01:00:09 +0000 (0:00:02.365) 0:00:10.720 ********** 2025-04-14 01:02:13.620872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 01:02:13.620954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 01:02:13.620970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 01:02:13.620984 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.620998 | orchestrator | 2025-04-14 01:02:13.621012 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-14 01:02:13.621026 | orchestrator | Monday 14 April 2025 01:00:09 +0000 (0:00:00.441) 0:00:11.162 ********** 2025-04-14 01:02:13.621052 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621099 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621113 | orchestrator | 2025-04-14 01:02:13.621127 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-14 01:02:13.621141 | orchestrator | Monday 14 April 2025 01:00:10 +0000 (0:00:00.750) 0:00:11.912 ********** 2025-04-14 01:02:13.621157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:13.621201 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621215 | orchestrator | 2025-04-14 01:02:13.621229 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-14 01:02:13.621243 | orchestrator | Monday 14 April 2025 01:00:10 +0000 (0:00:00.163) 0:00:12.075 ********** 2025-04-14 01:02:13.621260 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '94671843efde', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-14 01:00:07.961441', 'end': '2025-04-14 01:00:08.006066', 'delta': '0:00:00.044625', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94671843efde'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:13.621289 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '53f9bc97ddf6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-14 01:00:08.572602', 'end': '2025-04-14 01:00:08.616815', 'delta': '0:00:00.044213', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['53f9bc97ddf6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:13.621314 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '170466f45f38', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-14 01:00:09.138048', 'end': '2025-04-14 01:00:09.175264', 'delta': '0:00:00.037216', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': 
True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['170466f45f38'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:13.621329 | orchestrator | 2025-04-14 01:02:13.621343 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-14 01:02:13.621358 | orchestrator | Monday 14 April 2025 01:00:11 +0000 (0:00:00.231) 0:00:12.307 ********** 2025-04-14 01:02:13.621372 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.621386 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.621400 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.621414 | orchestrator | 2025-04-14 01:02:13.621429 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-14 01:02:13.621442 | orchestrator | Monday 14 April 2025 01:00:11 +0000 (0:00:00.472) 0:00:12.779 ********** 2025-04-14 01:02:13.621454 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-14 01:02:13.621467 | orchestrator | 2025-04-14 01:02:13.621479 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-14 01:02:13.621492 | orchestrator | Monday 14 April 2025 01:00:12 +0000 (0:00:01.352) 0:00:14.132 ********** 2025-04-14 01:02:13.621504 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621517 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.621530 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.621543 | orchestrator | 2025-04-14 01:02:13.621555 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-14 01:02:13.621568 | orchestrator | Monday 14 April 2025 01:00:13 +0000 (0:00:00.500) 0:00:14.633 ********** 2025-04-14 01:02:13.621580 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621592 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.621605 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.621617 | orchestrator | 2025-04-14 01:02:13.621629 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 01:02:13.621642 | orchestrator | Monday 14 April 2025 01:00:13 +0000 (0:00:00.458) 0:00:15.091 ********** 2025-04-14 01:02:13.621654 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621666 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.621679 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.621691 | orchestrator | 2025-04-14 01:02:13.621704 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-14 01:02:13.621716 | orchestrator | Monday 14 April 2025 01:00:14 +0000 (0:00:00.305) 0:00:15.396 ********** 2025-04-14 01:02:13.621728 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.621741 | orchestrator | 2025-04-14 01:02:13.621753 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-14 01:02:13.621765 | orchestrator | Monday 14 April 2025 01:00:14 +0000 (0:00:00.136) 0:00:15.532 ********** 2025-04-14 01:02:13.621778 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621790 | orchestrator | 2025-04-14 01:02:13.621803 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 01:02:13.621820 | orchestrator | Monday 14 
April 2025 01:00:14 +0000 (0:00:00.241) 0:00:15.774 ********** 2025-04-14 01:02:13.621833 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621858 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.621871 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.621898 | orchestrator | 2025-04-14 01:02:13.621911 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-14 01:02:13.621924 | orchestrator | Monday 14 April 2025 01:00:15 +0000 (0:00:00.524) 0:00:16.298 ********** 2025-04-14 01:02:13.621936 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.621949 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.621961 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.621973 | orchestrator | 2025-04-14 01:02:13.621985 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-14 01:02:13.621998 | orchestrator | Monday 14 April 2025 01:00:15 +0000 (0:00:00.343) 0:00:16.642 ********** 2025-04-14 01:02:13.622010 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.622075 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.622088 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.622100 | orchestrator | 2025-04-14 01:02:13.622113 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-14 01:02:13.622126 | orchestrator | Monday 14 April 2025 01:00:15 +0000 (0:00:00.372) 0:00:17.014 ********** 2025-04-14 01:02:13.622138 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.622151 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.622170 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.622184 | orchestrator | 2025-04-14 01:02:13.622196 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-14 01:02:13.622209 | orchestrator | Monday 14 April 2025 01:00:16 +0000 (0:00:00.331) 0:00:17.346 ********** 2025-04-14 01:02:13.622221 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.622234 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.622246 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.622258 | orchestrator | 2025-04-14 01:02:13.622339 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-14 01:02:13.622352 | orchestrator | Monday 14 April 2025 01:00:16 +0000 (0:00:00.579) 0:00:17.926 ********** 2025-04-14 01:02:13.622365 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.622377 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.622389 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.622402 | orchestrator | 2025-04-14 01:02:13.622415 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-14 01:02:13.622427 | orchestrator | Monday 14 April 2025 01:00:17 +0000 (0:00:00.396) 0:00:18.322 ********** 2025-04-14 01:02:13.622440 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.622460 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.622473 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.622485 | orchestrator | 2025-04-14 01:02:13.622497 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-14 01:02:13.622510 | orchestrator | Monday 14 April 2025 01:00:17 +0000 (0:00:00.338) 0:00:18.660 
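The skipped items listed below are the raw Ansible device facts of the storage nodes: ceph device-mapper volumes (dm-*), loop devices, the partitioned root disk sda, and the data disks sdb/sdc already claimed by ceph LVM. The task itself is skipped, presumably because osd_auto_discovery is not in use in this testbed. Purely as an illustration of what such auto-discovery has to filter, a loose sketch follows; the keep/skip rule is an assumption, not ceph-ansible's actual condition:

# Loose illustration of filtering Ansible device facts down to candidate
# OSD disks. The fact structure mirrors the skipped items in the log (with
# "..." as placeholders); the rule "whole, non-removable disks without
# partitions or holders" is an assumption, not ceph-ansible's logic.
devices = {
    "dm-0": {"holders": [], "partitions": {}, "removable": "0",
             "links": {"ids": ["dm-name-ceph--..."]}, "size": "20.00 GB"},
    "loop0": {"holders": [], "partitions": {}, "removable": "0",
              "links": {"ids": []}, "size": "0.00 Bytes"},
    "sda": {"holders": [], "partitions": {"sda1": {}}, "removable": "0",
            "links": {"ids": ["scsi-..."]}, "size": "80.00 GB"},
    "sdb": {"holders": ["ceph--...osd--block--..."], "partitions": {},
            "removable": "0", "links": {"ids": ["lvm-pv-uuid-..."]},
            "size": "20.00 GB"},
}

def osd_candidates(devs: dict) -> list[str]:
    keep = []
    for name, facts in devs.items():
        if name.startswith(("dm-", "loop")):
            continue  # device-mapper targets and loop devices
        if facts["partitions"] or facts["holders"]:
            continue  # already partitioned or claimed (e.g. by ceph LVM)
        if facts["size"] == "0.00 Bytes" or facts["removable"] != "0":
            continue  # empty or removable devices
        keep.append(f"/dev/{name}")
    return keep

if __name__ == "__main__":
    print(osd_candidates(devices))  # -> [] here: sda is partitioned, sdb is already a ceph OSD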
********** 2025-04-14 01:02:13.622523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--010b5855--d3d9--5348--85e9--2943091c3a59-osd--block--010b5855--d3d9--5348--85e9--2943091c3a59', 'dm-uuid-LVM-TqHshLn3iYUe960yiXD5OXZHtSBtOj2m3zisZzkBLeEnn6MTuT90ygDOtuTYvAuF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47a37963--cc76--524e--bf57--deb935e0a7e9-osd--block--47a37963--cc76--524e--bf57--deb935e0a7e9', 'dm-uuid-LVM-y1ZmIyxYKhx4sUrw7xe8MGNMKsxtuS4mjC2i6a3UALT7T4YkxqXjASzL5fefG51j'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--89320cc7--f853--5314--9a76--744a2d019bd6-osd--block--89320cc7--f853--5314--9a76--744a2d019bd6', 'dm-uuid-LVM-4BRgDs484beWEfjdIb2VPkFOf4kTQqv5GhNa0iWWdIcvbW9kmd5z0tVFqIiO13G7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part1', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part14', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part15', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part16', 'scsi-SQEMU_QEMU_HARDDISK_3869222f-65df-4a19-aa83-a02710b9e82d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.622775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8cf203b--da46--5fbb--85f7--5c1db9738ebe-osd--block--a8cf203b--da46--5fbb--85f7--5c1db9738ebe', 'dm-uuid-LVM-sO5HTzVp8cMsaMSKOodkgw3AtLe66zPl0lC0GI1jXxIjQ8TMPrcKSy5BAh3PGT4t'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--010b5855--d3d9--5348--85e9--2943091c3a59-osd--block--010b5855--d3d9--5348--85e9--2943091c3a59'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mrf4dD-GL5h-E03t-CBbj-5jPv-pgYj-wsFAyU', 'scsi-0QEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e', 'scsi-SQEMU_QEMU_HARDDISK_c26cfb84-2784-4068-ac39-279abdffc82e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.622818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--47a37963--cc76--524e--bf57--deb935e0a7e9-osd--block--47a37963--cc76--524e--bf57--deb935e0a7e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-93xZn8-IWyK-BjNI-AvCP-us34-mI91-TGygRI', 'scsi-0QEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e', 'scsi-SQEMU_QEMU_HARDDISK_938a8574-ab31-4693-953b-ad06db98cc0e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.622870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2', 'scsi-SQEMU_QEMU_HARDDISK_0623da07-2b86-4b0f-8ae6-479bebb1d3d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.622948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.622971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.622995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623096 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.623117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ad7b7c-0c39-408f-b5ea-89bdf3128e12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--89320cc7--f853--5314--9a76--744a2d019bd6-osd--block--89320cc7--f853--5314--9a76--744a2d019bd6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wZfeZr-qMba-0Ko2-INeM-pJEo-LPfT-kSt3gu', 'scsi-0QEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9', 'scsi-SQEMU_QEMU_HARDDISK_676c1686-7068-4aa0-a437-1ca2ad657cc9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a8cf203b--da46--5fbb--85f7--5c1db9738ebe-osd--block--a8cf203b--da46--5fbb--85f7--5c1db9738ebe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eR6ePA-4v99-Besb-lpQV-et9r-uPvm-AveYNI', 'scsi-0QEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d', 'scsi-SQEMU_QEMU_HARDDISK_64225693-fc38-404b-a874-78411dc3466d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57', 'scsi-SQEMU_QEMU_HARDDISK_bda45bef-0c7e-4642-a586-327a75973f57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623219 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.623238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3f558b9--064d--5710--baa4--8e41f44a2baf-osd--block--b3f558b9--064d--5710--baa4--8e41f44a2baf', 'dm-uuid-LVM-852tudrJnju0BoQciiOZqFqgyFmvtD1x0ZzSlD0QeCAtuVTFBUEwL0Xm3fd7KiAZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--1e3b39ff--ab1d--556f--9f1e--d127c66e789a-osd--block--1e3b39ff--ab1d--556f--9f1e--d127c66e789a', 'dm-uuid-LVM-lEfTGmpWDk3p7vqZv5369L2FJFmfdaWfdjcJ9RegfbSoJGFOkcQYhswTuJxcd04a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:13.623385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d052f429-2014-4477-b3ba-20099dd124f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b3f558b9--064d--5710--baa4--8e41f44a2baf-osd--block--b3f558b9--064d--5710--baa4--8e41f44a2baf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VvOAqp-pSDs-CwAn-MWjt-UXVs-306O-ApYEVf', 'scsi-0QEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496', 'scsi-SQEMU_QEMU_HARDDISK_4f96d1f1-65aa-443a-b2b5-a30371495496'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e3b39ff--ab1d--556f--9f1e--d127c66e789a-osd--block--1e3b39ff--ab1d--556f--9f1e--d127c66e789a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CHHBLv-LfMM-0O7E-z7MO-wwRQ-sKY3-phLN6I', 'scsi-0QEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3', 'scsi-SQEMU_QEMU_HARDDISK_d8fa8ebf-4c84-4a81-a8cc-e0634aceb5f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f', 'scsi-SQEMU_QEMU_HARDDISK_03a3c0ae-ae5b-4103-947a-830f0553055f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:13.623490 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.623502 | orchestrator | 2025-04-14 01:02:13.623515 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-14 01:02:13.623528 | orchestrator | Monday 14 April 2025 01:00:18 +0000 (0:00:00.664) 0:00:19.325 ********** 2025-04-14 01:02:13.623540 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-04-14 01:02:13.623553 | orchestrator | 2025-04-14 01:02:13.623565 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-14 01:02:13.623578 | orchestrator | Monday 14 April 2025 01:00:19 +0000 (0:00:01.493) 0:00:20.818 ********** 2025-04-14 01:02:13.623590 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.623602 | orchestrator | 2025-04-14 01:02:13.623615 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-04-14 01:02:13.623627 | orchestrator | Monday 14 April 2025 01:00:19 +0000 (0:00:00.188) 0:00:21.007 ********** 2025-04-14 01:02:13.623639 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.623652 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.623822 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.623836 | orchestrator | 2025-04-14 01:02:13.623849 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-14 01:02:13.623862 | orchestrator | Monday 14 April 2025 01:00:20 +0000 (0:00:00.387) 0:00:21.395 ********** 2025-04-14 01:02:13.623874 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.623919 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.623939 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.623952 | orchestrator | 2025-04-14 01:02:13.623964 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-14 01:02:13.623977 | orchestrator | Monday 14 April 2025 01:00:20 +0000 (0:00:00.692) 0:00:22.088 ********** 2025-04-14 01:02:13.623989 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.624001 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.624014 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.624026 | orchestrator | 2025-04-14 01:02:13.624038 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 01:02:13.624050 | orchestrator | Monday 14 April 2025 01:00:21 +0000 (0:00:00.291) 0:00:22.379 ********** 2025-04-14 01:02:13.624063 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.624075 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.624087 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.624099 | orchestrator | 2025-04-14 01:02:13.624112 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 01:02:13.624124 | orchestrator | Monday 14 April 2025 01:00:21 +0000 (0:00:00.864) 0:00:23.244 ********** 2025-04-14 01:02:13.624136 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.624149 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.624161 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.624173 | orchestrator | 2025-04-14 01:02:13.624185 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 01:02:13.624197 | orchestrator | Monday 14 April 2025 01:00:22 +0000 (0:00:00.327) 0:00:23.572 ********** 2025-04-14 01:02:13.624210 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.624222 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.624236 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.624257 | orchestrator | 2025-04-14 01:02:13.624275 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 01:02:13.624288 | orchestrator | Monday 14 April 2025 01:00:22 +0000 (0:00:00.443) 0:00:24.016 ********** 2025-04-14 01:02:13.624300 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.624312 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.624324 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.624337 | orchestrator | 2025-04-14 01:02:13.624350 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-14 01:02:13.624370 | orchestrator | Monday 14 April 2025 01:00:23 +0000 (0:00:00.330) 
0:00:24.346 ********** 2025-04-14 01:02:13.624382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 01:02:13.624395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 01:02:13.624408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 01:02:13.624420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 01:02:13.624433 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.624450 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 01:02:13.624463 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 01:02:13.624475 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 01:02:13.624487 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.624500 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 01:02:13.624512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 01:02:13.624524 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.624537 | orchestrator | 2025-04-14 01:02:13.624549 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-14 01:02:13.624569 | orchestrator | Monday 14 April 2025 01:00:24 +0000 (0:00:00.921) 0:00:25.268 ********** 2025-04-14 01:02:13.624586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 01:02:13.624608 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 01:02:13.624621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 01:02:13.624633 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 01:02:13.624646 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 01:02:13.624658 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 01:02:13.624670 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.624683 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 01:02:13.624695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 01:02:13.624707 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.624720 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 01:02:13.624732 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.624744 | orchestrator | 2025-04-14 01:02:13.624757 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-14 01:02:13.624769 | orchestrator | Monday 14 April 2025 01:00:24 +0000 (0:00:00.668) 0:00:25.936 ********** 2025-04-14 01:02:13.624781 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-04-14 01:02:13.624794 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-04-14 01:02:13.624806 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-04-14 01:02:13.624818 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-04-14 01:02:13.624831 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-04-14 01:02:13.624843 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-04-14 01:02:13.624855 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-04-14 01:02:13.624868 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-04-14 01:02:13.624906 | orchestrator | ok: [testbed-node-5] 
=> (item=testbed-node-2) 2025-04-14 01:02:13.624920 | orchestrator | 2025-04-14 01:02:13.624932 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-14 01:02:13.624944 | orchestrator | Monday 14 April 2025 01:00:26 +0000 (0:00:02.231) 0:00:28.168 ********** 2025-04-14 01:02:13.624957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 01:02:13.624969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 01:02:13.624981 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 01:02:13.624994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 01:02:13.625013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 01:02:13.625025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 01:02:13.625037 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625050 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 01:02:13.625075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 01:02:13.625087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 01:02:13.625100 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625112 | orchestrator | 2025-04-14 01:02:13.625124 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-14 01:02:13.625137 | orchestrator | Monday 14 April 2025 01:00:27 +0000 (0:00:00.689) 0:00:28.858 ********** 2025-04-14 01:02:13.625149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-04-14 01:02:13.625162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-04-14 01:02:13.625174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-04-14 01:02:13.625187 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-04-14 01:02:13.625199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-04-14 01:02:13.625211 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-04-14 01:02:13.625223 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625236 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625248 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-04-14 01:02:13.625261 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-04-14 01:02:13.625273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-04-14 01:02:13.625285 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625298 | orchestrator | 2025-04-14 01:02:13.625310 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-14 01:02:13.625322 | orchestrator | Monday 14 April 2025 01:00:28 +0000 (0:00:00.482) 0:00:29.340 ********** 2025-04-14 01:02:13.625335 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 01:02:13.625352 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 01:02:13.625365 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 01:02:13.625378 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 01:02:13.625390 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 01:02:13.625403 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625415 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 01:02:13.625427 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625440 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-04-14 01:02:13.625458 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 01:02:13.625471 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 01:02:13.625484 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625496 | orchestrator | 2025-04-14 01:02:13.625509 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-14 01:02:13.625521 | orchestrator | Monday 14 April 2025 01:00:28 +0000 (0:00:00.409) 0:00:29.750 ********** 2025-04-14 01:02:13.625534 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:02:13.625546 | orchestrator | 2025-04-14 01:02:13.625559 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-04-14 01:02:13.625577 | orchestrator | Monday 14 April 2025 01:00:29 +0000 (0:00:00.769) 0:00:30.519 ********** 2025-04-14 01:02:13.625590 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625602 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625614 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625627 | orchestrator | 2025-04-14 01:02:13.625639 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-04-14 01:02:13.625651 | orchestrator | Monday 14 April 2025 01:00:29 +0000 (0:00:00.320) 0:00:30.840 ********** 2025-04-14 01:02:13.625664 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625676 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625688 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625701 | orchestrator | 2025-04-14 01:02:13.625713 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-04-14 01:02:13.625726 | orchestrator | Monday 14 April 2025 01:00:29 +0000 (0:00:00.332) 0:00:31.173 ********** 2025-04-14 01:02:13.625738 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625750 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.625762 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.625779 | orchestrator | 2025-04-14 01:02:13.625792 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-04-14 01:02:13.625804 | orchestrator | Monday 14 April 2025 01:00:30 +0000 (0:00:00.302) 0:00:31.475 ********** 2025-04-14 01:02:13.625816 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.625829 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.625841 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.625853 | orchestrator | 2025-04-14 01:02:13.625866 | orchestrator | TASK [ceph-facts : set_fact _interface] 
**************************************** 2025-04-14 01:02:13.625896 | orchestrator | Monday 14 April 2025 01:00:30 +0000 (0:00:00.715) 0:00:32.190 ********** 2025-04-14 01:02:13.625910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 01:02:13.625923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 01:02:13.625935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 01:02:13.625947 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.625960 | orchestrator | 2025-04-14 01:02:13.625972 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-04-14 01:02:13.625984 | orchestrator | Monday 14 April 2025 01:00:31 +0000 (0:00:00.413) 0:00:32.604 ********** 2025-04-14 01:02:13.625997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 01:02:13.626009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 01:02:13.626061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 01:02:13.626074 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626087 | orchestrator | 2025-04-14 01:02:13.626100 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-04-14 01:02:13.626112 | orchestrator | Monday 14 April 2025 01:00:31 +0000 (0:00:00.402) 0:00:33.006 ********** 2025-04-14 01:02:13.626125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 01:02:13.626137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 01:02:13.626149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 01:02:13.626161 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626174 | orchestrator | 2025-04-14 01:02:13.626186 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 01:02:13.626199 | orchestrator | Monday 14 April 2025 01:00:32 +0000 (0:00:00.409) 0:00:33.416 ********** 2025-04-14 01:02:13.626211 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:02:13.626224 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:02:13.626236 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:02:13.626249 | orchestrator | 2025-04-14 01:02:13.626261 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-04-14 01:02:13.626285 | orchestrator | Monday 14 April 2025 01:00:32 +0000 (0:00:00.336) 0:00:33.753 ********** 2025-04-14 01:02:13.626298 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-04-14 01:02:13.626310 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-04-14 01:02:13.626322 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-04-14 01:02:13.626335 | orchestrator | 2025-04-14 01:02:13.626347 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-04-14 01:02:13.626360 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.941) 0:00:34.694 ********** 2025-04-14 01:02:13.626372 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626384 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.626397 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.626409 | orchestrator | 2025-04-14 01:02:13.626421 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-04-14 01:02:13.626434 | orchestrator | Monday 14 April 2025 
01:00:33 +0000 (0:00:00.550) 0:00:35.245 ********** 2025-04-14 01:02:13.626446 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626458 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.626470 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.626483 | orchestrator | 2025-04-14 01:02:13.626495 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-04-14 01:02:13.626514 | orchestrator | Monday 14 April 2025 01:00:34 +0000 (0:00:00.363) 0:00:35.609 ********** 2025-04-14 01:02:13.626527 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-04-14 01:02:13.626542 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626562 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-04-14 01:02:13.626583 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.626603 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-04-14 01:02:13.626623 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.626643 | orchestrator | 2025-04-14 01:02:13.626663 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-04-14 01:02:13.626684 | orchestrator | Monday 14 April 2025 01:00:34 +0000 (0:00:00.495) 0:00:36.105 ********** 2025-04-14 01:02:13.626704 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-04-14 01:02:13.626725 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626744 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-04-14 01:02:13.626757 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.626769 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-04-14 01:02:13.626782 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.626794 | orchestrator | 2025-04-14 01:02:13.626806 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-04-14 01:02:13.626818 | orchestrator | Monday 14 April 2025 01:00:35 +0000 (0:00:00.346) 0:00:36.451 ********** 2025-04-14 01:02:13.626831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-04-14 01:02:13.626843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-04-14 01:02:13.626856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-04-14 01:02:13.626868 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-04-14 01:02:13.626907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-04-14 01:02:13.626930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-04-14 01:02:13.626952 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.626973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-04-14 01:02:13.626993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-04-14 01:02:13.627014 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.627036 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-04-14 01:02:13.627069 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.627084 | orchestrator | 2025-04-14 01:02:13.627096 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package 
or old ceph-iscsi-config/cli] *** 2025-04-14 01:02:13.627109 | orchestrator | Monday 14 April 2025 01:00:36 +0000 (0:00:01.159) 0:00:37.611 ********** 2025-04-14 01:02:13.627121 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.627133 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.627145 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:02:13.627158 | orchestrator | 2025-04-14 01:02:13.627170 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-14 01:02:13.627183 | orchestrator | Monday 14 April 2025 01:00:36 +0000 (0:00:00.323) 0:00:37.934 ********** 2025-04-14 01:02:13.627195 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-14 01:02:13.627207 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:13.627219 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:13.627232 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-14 01:02:13.627244 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 01:02:13.627256 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 01:02:13.627268 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-14 01:02:13.627281 | orchestrator | 2025-04-14 01:02:13.627293 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-14 01:02:13.627305 | orchestrator | Monday 14 April 2025 01:00:37 +0000 (0:00:01.065) 0:00:39.000 ********** 2025-04-14 01:02:13.627318 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-04-14 01:02:13.627330 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:13.627342 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:13.627354 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-04-14 01:02:13.627367 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 01:02:13.627379 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 01:02:13.627392 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-14 01:02:13.627404 | orchestrator | 2025-04-14 01:02:13.627416 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-04-14 01:02:13.627428 | orchestrator | Monday 14 April 2025 01:00:39 +0000 (0:00:01.856) 0:00:40.856 ********** 2025-04-14 01:02:13.627441 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:02:13.627453 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:02:13.627466 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-04-14 01:02:13.627478 | orchestrator | 2025-04-14 01:02:13.627491 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-04-14 01:02:13.627517 | orchestrator | Monday 14 April 2025 01:00:40 +0000 (0:00:00.549) 0:00:41.406 ********** 2025-04-14 01:02:13.627532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 01:02:13.627547 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 01:02:13.627560 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 01:02:13.627579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 01:02:13.627592 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-04-14 01:02:13.627605 | orchestrator | 2025-04-14 01:02:13.627617 | orchestrator | TASK [generate keys] *********************************************************** 2025-04-14 01:02:13.627630 | orchestrator | Monday 14 April 2025 01:01:20 +0000 (0:00:40.561) 0:01:21.967 ********** 2025-04-14 01:02:13.627642 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627654 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627667 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627691 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627716 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-04-14 01:02:13.627729 | orchestrator | 2025-04-14 01:02:13.627741 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-04-14 01:02:13.627754 | orchestrator | Monday 14 April 2025 01:01:41 +0000 (0:00:21.092) 0:01:43.060 ********** 2025-04-14 01:02:13.627766 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627778 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627790 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627815 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627827 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627840 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-04-14 01:02:13.627852 | orchestrator | 2025-04-14 01:02:13.627864 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-04-14 01:02:13.627930 | orchestrator | Monday 14 April 2025 01:01:51 +0000 (0:00:10.047) 0:01:53.107 ********** 2025-04-14 01:02:13.627946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627958 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:13.627971 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:13.627983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.627996 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:13.628006 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:13.628016 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.628026 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:13.628042 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:13.628052 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:13.628062 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:13.628077 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:16.666101 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:16.666235 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:16.666255 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:16.666269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-04-14 01:02:16.666282 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-04-14 01:02:16.666295 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-04-14 01:02:16.666309 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-04-14 01:02:16.666323 | orchestrator | 2025-04-14 01:02:16.666337 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:02:16.666353 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-04-14 01:02:16.666367 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0 2025-04-14 01:02:16.666382 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0 2025-04-14 01:02:16.666395 | orchestrator | 2025-04-14 01:02:16.666409 | orchestrator | 2025-04-14 01:02:16.666422 | orchestrator | 2025-04-14 01:02:16.666436 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:02:16.666449 | orchestrator | Monday 14 April 2025 01:02:10 +0000 
(0:00:18.667) 0:02:11.774 ********** 2025-04-14 01:02:16.666462 | orchestrator | =============================================================================== 2025-04-14 01:02:16.666476 | orchestrator | create openstack pool(s) ----------------------------------------------- 40.56s 2025-04-14 01:02:16.666489 | orchestrator | generate keys ---------------------------------------------------------- 21.09s 2025-04-14 01:02:16.666503 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.67s 2025-04-14 01:02:16.666516 | orchestrator | get keys from monitors ------------------------------------------------- 10.05s 2025-04-14 01:02:16.666530 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.37s 2025-04-14 01:02:16.666543 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.23s 2025-04-14 01:02:16.666556 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.86s 2025-04-14 01:02:16.666570 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.49s 2025-04-14 01:02:16.666583 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.35s 2025-04-14 01:02:16.666596 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.16s 2025-04-14 01:02:16.666610 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.07s 2025-04-14 01:02:16.666623 | orchestrator | ceph-facts : set_fact rgw_instances without rgw multisite --------------- 0.94s 2025-04-14 01:02:16.666636 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.92s 2025-04-14 01:02:16.666650 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.89s 2025-04-14 01:02:16.666663 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.86s 2025-04-14 01:02:16.666676 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.85s 2025-04-14 01:02:16.666709 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.77s 2025-04-14 01:02:16.666723 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.77s 2025-04-14 01:02:16.666736 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.75s 2025-04-14 01:02:16.666749 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.73s 2025-04-14 01:02:16.666763 | orchestrator | 2025-04-14 01:02:13 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:16.666777 | orchestrator | 2025-04-14 01:02:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:16.666808 | orchestrator | 2025-04-14 01:02:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:16.669651 | orchestrator | 2025-04-14 01:02:16 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:16.670404 | orchestrator | 2025-04-14 01:02:16 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:19.725108 | orchestrator | 2025-04-14 01:02:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:19.725263 | orchestrator | 2025-04-14 01:02:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
2025-04-14 01:02:19.725709 | orchestrator | 2025-04-14 01:02:19 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:19.727221 | orchestrator | 2025-04-14 01:02:19 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:22.789279 | orchestrator | 2025-04-14 01:02:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:22.789431 | orchestrator | 2025-04-14 01:02:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:22.790138 | orchestrator | 2025-04-14 01:02:22 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:22.791540 | orchestrator | 2025-04-14 01:02:22 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:22.792954 | orchestrator | 2025-04-14 01:02:22 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:22.793066 | orchestrator | 2025-04-14 01:02:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:25.837973 | orchestrator | 2025-04-14 01:02:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:25.839765 | orchestrator | 2025-04-14 01:02:25 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:25.840656 | orchestrator | 2025-04-14 01:02:25 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:25.841747 | orchestrator | 2025-04-14 01:02:25 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:28.894092 | orchestrator | 2025-04-14 01:02:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:28.894241 | orchestrator | 2025-04-14 01:02:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:28.894904 | orchestrator | 2025-04-14 01:02:28 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:28.897119 | orchestrator | 2025-04-14 01:02:28 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:28.899743 | orchestrator | 2025-04-14 01:02:28 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:31.955330 | orchestrator | 2025-04-14 01:02:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:31.955473 | orchestrator | 2025-04-14 01:02:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:31.956309 | orchestrator | 2025-04-14 01:02:31 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:31.957835 | orchestrator | 2025-04-14 01:02:31 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:31.959166 | orchestrator | 2025-04-14 01:02:31 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:35.012229 | orchestrator | 2025-04-14 01:02:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:35.012373 | orchestrator | 2025-04-14 01:02:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:35.014828 | orchestrator | 2025-04-14 01:02:35 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:35.017504 | orchestrator | 2025-04-14 01:02:35 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:35.018685 | orchestrator | 2025-04-14 01:02:35 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 
2025-04-14 01:02:38.060325 | orchestrator | 2025-04-14 01:02:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:38.060462 | orchestrator | 2025-04-14 01:02:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:38.061426 | orchestrator | 2025-04-14 01:02:38 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:38.062967 | orchestrator | 2025-04-14 01:02:38 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:38.064274 | orchestrator | 2025-04-14 01:02:38 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:41.120480 | orchestrator | 2025-04-14 01:02:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:41.120629 | orchestrator | 2025-04-14 01:02:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:41.122092 | orchestrator | 2025-04-14 01:02:41 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:41.122132 | orchestrator | 2025-04-14 01:02:41 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:41.125177 | orchestrator | 2025-04-14 01:02:41 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:44.182391 | orchestrator | 2025-04-14 01:02:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:44.182552 | orchestrator | 2025-04-14 01:02:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:44.184232 | orchestrator | 2025-04-14 01:02:44 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:44.187447 | orchestrator | 2025-04-14 01:02:44 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:44.191058 | orchestrator | 2025-04-14 01:02:44 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:44.191790 | orchestrator | 2025-04-14 01:02:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:47.239702 | orchestrator | 2025-04-14 01:02:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:47.240452 | orchestrator | 2025-04-14 01:02:47 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:47.241230 | orchestrator | 2025-04-14 01:02:47 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:47.242171 | orchestrator | 2025-04-14 01:02:47 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:47.242317 | orchestrator | 2025-04-14 01:02:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:50.290271 | orchestrator | 2025-04-14 01:02:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:50.291773 | orchestrator | 2025-04-14 01:02:50 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:50.292766 | orchestrator | 2025-04-14 01:02:50 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state STARTED 2025-04-14 01:02:50.293901 | orchestrator | 2025-04-14 01:02:50 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:53.341325 | orchestrator | 2025-04-14 01:02:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:53.341465 | orchestrator | 2025-04-14 01:02:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:53.343502 
| orchestrator | 2025-04-14 01:02:53 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:53.347443 | orchestrator | 2025-04-14 01:02:53.347490 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 01:02:53.347524 | orchestrator | 2025-04-14 01:02:53.347539 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-04-14 01:02:53.347553 | orchestrator | 2025-04-14 01:02:53.347568 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-04-14 01:02:53.347587 | orchestrator | Monday 14 April 2025 01:02:23 +0000 (0:00:00.468) 0:00:00.468 ********** 2025-04-14 01:02:53.347602 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-04-14 01:02:53.347618 | orchestrator | 2025-04-14 01:02:53.347632 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-04-14 01:02:53.347646 | orchestrator | Monday 14 April 2025 01:02:24 +0000 (0:00:00.208) 0:00:00.677 ********** 2025-04-14 01:02:53.347660 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.347674 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-04-14 01:02:53.347689 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-04-14 01:02:53.347702 | orchestrator | 2025-04-14 01:02:53.347717 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-04-14 01:02:53.347730 | orchestrator | Monday 14 April 2025 01:02:25 +0000 (0:00:00.863) 0:00:01.540 ********** 2025-04-14 01:02:53.347744 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-04-14 01:02:53.347758 | orchestrator | 2025-04-14 01:02:53.347772 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-04-14 01:02:53.347786 | orchestrator | Monday 14 April 2025 01:02:25 +0000 (0:00:00.226) 0:00:01.767 ********** 2025-04-14 01:02:53.347800 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.347815 | orchestrator | 2025-04-14 01:02:53.347829 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-04-14 01:02:53.347844 | orchestrator | Monday 14 April 2025 01:02:25 +0000 (0:00:00.595) 0:00:02.363 ********** 2025-04-14 01:02:53.347911 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.347927 | orchestrator | 2025-04-14 01:02:53.347940 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-04-14 01:02:53.347954 | orchestrator | Monday 14 April 2025 01:02:25 +0000 (0:00:00.146) 0:00:02.510 ********** 2025-04-14 01:02:53.347968 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.347982 | orchestrator | 2025-04-14 01:02:53.347996 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-04-14 01:02:53.348010 | orchestrator | Monday 14 April 2025 01:02:26 +0000 (0:00:00.426) 0:00:02.936 ********** 2025-04-14 01:02:53.348025 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.348041 | orchestrator | 2025-04-14 01:02:53.348057 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-04-14 01:02:53.348097 | orchestrator | Monday 14 April 2025 01:02:26 +0000 (0:00:00.146) 0:00:03.083 ********** 2025-04-14 
01:02:53.348114 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.348130 | orchestrator | 2025-04-14 01:02:53.348145 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-04-14 01:02:53.348176 | orchestrator | Monday 14 April 2025 01:02:26 +0000 (0:00:00.125) 0:00:03.209 ********** 2025-04-14 01:02:53.348193 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.348209 | orchestrator | 2025-04-14 01:02:53.348226 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-04-14 01:02:53.348244 | orchestrator | Monday 14 April 2025 01:02:26 +0000 (0:00:00.150) 0:00:03.359 ********** 2025-04-14 01:02:53.348261 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.348279 | orchestrator | 2025-04-14 01:02:53.348295 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-04-14 01:02:53.348312 | orchestrator | Monday 14 April 2025 01:02:26 +0000 (0:00:00.147) 0:00:03.507 ********** 2025-04-14 01:02:53.348328 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.348343 | orchestrator | 2025-04-14 01:02:53.348358 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-04-14 01:02:53.348373 | orchestrator | Monday 14 April 2025 01:02:27 +0000 (0:00:00.319) 0:00:03.826 ********** 2025-04-14 01:02:53.348388 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.348404 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:53.348420 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:53.348436 | orchestrator | 2025-04-14 01:02:53.348451 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-04-14 01:02:53.348466 | orchestrator | Monday 14 April 2025 01:02:27 +0000 (0:00:00.681) 0:00:04.508 ********** 2025-04-14 01:02:53.348481 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.348497 | orchestrator | 2025-04-14 01:02:53.348512 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-04-14 01:02:53.348527 | orchestrator | Monday 14 April 2025 01:02:28 +0000 (0:00:00.236) 0:00:04.745 ********** 2025-04-14 01:02:53.348543 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.348558 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:53.348578 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:53.348594 | orchestrator | 2025-04-14 01:02:53.348610 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-04-14 01:02:53.348625 | orchestrator | Monday 14 April 2025 01:02:30 +0000 (0:00:01.806) 0:00:06.551 ********** 2025-04-14 01:02:53.348640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 01:02:53.348656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 01:02:53.348671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 01:02:53.348686 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.348702 | orchestrator | 2025-04-14 01:02:53.348717 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-04-14 
01:02:53.348745 | orchestrator | Monday 14 April 2025 01:02:30 +0000 (0:00:00.443) 0:00:06.994 ********** 2025-04-14 01:02:53.348766 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348802 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348825 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.348841 | orchestrator | 2025-04-14 01:02:53.348880 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-04-14 01:02:53.348896 | orchestrator | Monday 14 April 2025 01:02:31 +0000 (0:00:00.824) 0:00:07.819 ********** 2025-04-14 01:02:53.348912 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348928 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348942 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-04-14 01:02:53.348991 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349006 | orchestrator | 2025-04-14 01:02:53.349020 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-04-14 01:02:53.349034 | orchestrator | Monday 14 April 2025 01:02:31 +0000 (0:00:00.180) 0:00:08.000 ********** 2025-04-14 01:02:53.349077 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '94671843efde', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-04-14 01:02:28.826470', 'end': '2025-04-14 01:02:28.863047', 'delta': '0:00:00.036577', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 
'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['94671843efde'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:53.349097 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '53f9bc97ddf6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-04-14 01:02:29.325677', 'end': '2025-04-14 01:02:29.365307', 'delta': '0:00:00.039630', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['53f9bc97ddf6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:53.349122 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '170466f45f38', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-04-14 01:02:29.854227', 'end': '2025-04-14 01:02:29.892973', 'delta': '0:00:00.038746', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['170466f45f38'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-04-14 01:02:53.349145 | orchestrator | 2025-04-14 01:02:53.349159 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-04-14 01:02:53.349173 | orchestrator | Monday 14 April 2025 01:02:31 +0000 (0:00:00.227) 0:00:08.228 ********** 2025-04-14 01:02:53.349187 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.349202 | orchestrator | 2025-04-14 01:02:53.349228 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-04-14 01:02:53.349243 | orchestrator | Monday 14 April 2025 01:02:31 +0000 (0:00:00.271) 0:00:08.500 ********** 2025-04-14 01:02:53.349257 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-04-14 01:02:53.349271 | orchestrator | 2025-04-14 01:02:53.349285 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-04-14 01:02:53.349299 | orchestrator | Monday 14 April 2025 01:02:34 +0000 (0:00:02.505) 0:00:11.005 ********** 2025-04-14 01:02:53.349313 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349327 | orchestrator | 2025-04-14 01:02:53.349341 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-04-14 01:02:53.349355 | orchestrator | Monday 14 April 2025 01:02:34 +0000 (0:00:00.136) 0:00:11.141 ********** 2025-04-14 01:02:53.349369 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349383 | orchestrator | 2025-04-14 01:02:53.349397 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 01:02:53.349411 | orchestrator | Monday 14 April 2025 01:02:34 +0000 (0:00:00.232) 0:00:11.374 ********** 2025-04-14 01:02:53.349425 | orchestrator | skipping: [testbed-node-0] 
2025-04-14 01:02:53.349439 | orchestrator | 2025-04-14 01:02:53.349453 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-04-14 01:02:53.349467 | orchestrator | Monday 14 April 2025 01:02:34 +0000 (0:00:00.120) 0:00:11.495 ********** 2025-04-14 01:02:53.349481 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.349495 | orchestrator | 2025-04-14 01:02:53.349509 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-04-14 01:02:53.349522 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.132) 0:00:11.627 ********** 2025-04-14 01:02:53.349536 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349551 | orchestrator | 2025-04-14 01:02:53.349564 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-04-14 01:02:53.349578 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.202) 0:00:11.830 ********** 2025-04-14 01:02:53.349592 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349606 | orchestrator | 2025-04-14 01:02:53.349620 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-04-14 01:02:53.349635 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.127) 0:00:11.958 ********** 2025-04-14 01:02:53.349649 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349671 | orchestrator | 2025-04-14 01:02:53.349686 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-04-14 01:02:53.349700 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.127) 0:00:12.085 ********** 2025-04-14 01:02:53.349715 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349730 | orchestrator | 2025-04-14 01:02:53.349744 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-04-14 01:02:53.349772 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.121) 0:00:12.207 ********** 2025-04-14 01:02:53.349787 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349802 | orchestrator | 2025-04-14 01:02:53.349816 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-04-14 01:02:53.349835 | orchestrator | Monday 14 April 2025 01:02:35 +0000 (0:00:00.293) 0:00:12.500 ********** 2025-04-14 01:02:53.349884 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349911 | orchestrator | 2025-04-14 01:02:53.349934 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-04-14 01:02:53.349949 | orchestrator | Monday 14 April 2025 01:02:36 +0000 (0:00:00.137) 0:00:12.637 ********** 2025-04-14 01:02:53.349963 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.349977 | orchestrator | 2025-04-14 01:02:53.349991 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-04-14 01:02:53.350005 | orchestrator | Monday 14 April 2025 01:02:36 +0000 (0:00:00.168) 0:00:12.805 ********** 2025-04-14 01:02:53.350064 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350081 | orchestrator | 2025-04-14 01:02:53.350095 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-04-14 01:02:53.350109 | orchestrator | Monday 14 April 2025 01:02:36 +0000 (0:00:00.132) 0:00:12.938 ********** 2025-04-14 01:02:53.350124 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-04-14 01:02:53.350347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part1', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part14', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part15', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part16', 'scsi-SQEMU_QEMU_HARDDISK_cc2b2766-94e1-4878-a1a5-413ffcf6433c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:53.350366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdb', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_318a826d-e453-41a1-9cbe-aee990c4d38b', 'scsi-SQEMU_QEMU_HARDDISK_318a826d-e453-41a1-9cbe-aee990c4d38b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:53.350382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdc', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1d452a86-d7ed-4b7e-a6e2-8adfa0173156', 'scsi-SQEMU_QEMU_HARDDISK_1d452a86-d7ed-4b7e-a6e2-8adfa0173156'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:53.350397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61d8c1b1-8af8-4257-810b-e0715f81f0ca', 'scsi-SQEMU_QEMU_HARDDISK_61d8c1b1-8af8-4257-810b-e0715f81f0ca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:53.350418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-04-14-00-02-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-04-14 01:02:53.350434 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350448 | orchestrator | 2025-04-14 01:02:53.350463 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-04-14 01:02:53.350477 | orchestrator | Monday 14 April 2025 01:02:36 +0000 (0:00:00.316) 0:00:13.254 ********** 2025-04-14 01:02:53.350491 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350505 | orchestrator | 2025-04-14 01:02:53.350519 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-04-14 01:02:53.350533 | orchestrator | Monday 14 April 2025 01:02:36 +0000 (0:00:00.255) 0:00:13.510 ********** 2025-04-14 01:02:53.350547 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350561 | orchestrator | 2025-04-14 01:02:53.350575 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-04-14 01:02:53.350589 | orchestrator | Monday 14 April 2025 01:02:37 +0000 (0:00:00.136) 0:00:13.646 ********** 2025-04-14 01:02:53.350603 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350617 | orchestrator | 2025-04-14 01:02:53.350630 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-04-14 01:02:53.350644 | orchestrator | Monday 14 April 2025 01:02:37 +0000 (0:00:00.144) 0:00:13.791 ********** 2025-04-14 01:02:53.350664 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.350678 | orchestrator | 2025-04-14 01:02:53.350693 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-04-14 01:02:53.350707 | orchestrator | Monday 14 April 2025 01:02:37 +0000 (0:00:00.450) 0:00:14.241 ********** 2025-04-14 01:02:53.350721 | 
orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.350735 | orchestrator | 2025-04-14 01:02:53.350749 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 01:02:53.350763 | orchestrator | Monday 14 April 2025 01:02:37 +0000 (0:00:00.137) 0:00:14.378 ********** 2025-04-14 01:02:53.350777 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.350791 | orchestrator | 2025-04-14 01:02:53.350805 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 01:02:53.350819 | orchestrator | Monday 14 April 2025 01:02:38 +0000 (0:00:00.460) 0:00:14.838 ********** 2025-04-14 01:02:53.350833 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.350894 | orchestrator | 2025-04-14 01:02:53.350912 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-04-14 01:02:53.350926 | orchestrator | Monday 14 April 2025 01:02:38 +0000 (0:00:00.343) 0:00:15.182 ********** 2025-04-14 01:02:53.350940 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.350954 | orchestrator | 2025-04-14 01:02:53.350968 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-04-14 01:02:53.350982 | orchestrator | Monday 14 April 2025 01:02:38 +0000 (0:00:00.254) 0:00:15.437 ********** 2025-04-14 01:02:53.350996 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351010 | orchestrator | 2025-04-14 01:02:53.351024 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-04-14 01:02:53.351044 | orchestrator | Monday 14 April 2025 01:02:39 +0000 (0:00:00.161) 0:00:15.599 ********** 2025-04-14 01:02:53.351058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 01:02:53.351073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 01:02:53.351087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 01:02:53.351101 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351115 | orchestrator | 2025-04-14 01:02:53.351129 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-04-14 01:02:53.351143 | orchestrator | Monday 14 April 2025 01:02:39 +0000 (0:00:00.482) 0:00:16.082 ********** 2025-04-14 01:02:53.351157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 01:02:53.351171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 01:02:53.351185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 01:02:53.351198 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351212 | orchestrator | 2025-04-14 01:02:53.351226 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-04-14 01:02:53.351240 | orchestrator | Monday 14 April 2025 01:02:40 +0000 (0:00:00.500) 0:00:16.582 ********** 2025-04-14 01:02:53.351254 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.351268 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-04-14 01:02:53.351282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-04-14 01:02:53.351296 | orchestrator | 2025-04-14 01:02:53.351310 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-04-14 01:02:53.351336 | orchestrator | Monday 14 April 2025 01:02:41 +0000 
(0:00:01.231) 0:00:17.814 ********** 2025-04-14 01:02:53.351350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 01:02:53.351365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 01:02:53.351378 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 01:02:53.351392 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351406 | orchestrator | 2025-04-14 01:02:53.351420 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-04-14 01:02:53.351434 | orchestrator | Monday 14 April 2025 01:02:41 +0000 (0:00:00.204) 0:00:18.018 ********** 2025-04-14 01:02:53.351448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-04-14 01:02:53.351462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-04-14 01:02:53.351475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-04-14 01:02:53.351489 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351503 | orchestrator | 2025-04-14 01:02:53.351517 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-04-14 01:02:53.351531 | orchestrator | Monday 14 April 2025 01:02:41 +0000 (0:00:00.213) 0:00:18.231 ********** 2025-04-14 01:02:53.351545 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-04-14 01:02:53.351559 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-04-14 01:02:53.351574 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-04-14 01:02:53.351588 | orchestrator | 2025-04-14 01:02:53.351602 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-04-14 01:02:53.351616 | orchestrator | Monday 14 April 2025 01:02:41 +0000 (0:00:00.207) 0:00:18.439 ********** 2025-04-14 01:02:53.351630 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351644 | orchestrator | 2025-04-14 01:02:53.351658 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-04-14 01:02:53.351672 | orchestrator | Monday 14 April 2025 01:02:42 +0000 (0:00:00.348) 0:00:18.788 ********** 2025-04-14 01:02:53.351685 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:53.351699 | orchestrator | 2025-04-14 01:02:53.351719 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-04-14 01:02:53.351733 | orchestrator | Monday 14 April 2025 01:02:42 +0000 (0:00:00.136) 0:00:18.925 ********** 2025-04-14 01:02:53.351747 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.351767 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:53.351782 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:53.351796 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-14 01:02:53.351810 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 01:02:53.351824 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 01:02:53.351838 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-04-14 01:02:53.351880 | orchestrator | 2025-04-14 01:02:53.351897 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-04-14 01:02:53.351911 | orchestrator | Monday 14 April 2025 01:02:43 +0000 (0:00:00.904) 0:00:19.829 ********** 2025-04-14 01:02:53.351925 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-04-14 01:02:53.351939 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-04-14 01:02:53.351953 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-04-14 01:02:53.351966 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-04-14 01:02:53.351980 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-04-14 01:02:53.351994 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-04-14 01:02:53.352008 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-04-14 01:02:53.352022 | orchestrator | 2025-04-14 01:02:53.352036 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-04-14 01:02:53.352049 | orchestrator | Monday 14 April 2025 01:02:44 +0000 (0:00:01.560) 0:00:21.390 ********** 2025-04-14 01:02:53.352063 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:53.352078 | orchestrator | 2025-04-14 01:02:53.352091 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-04-14 01:02:53.352105 | orchestrator | Monday 14 April 2025 01:02:45 +0000 (0:00:00.436) 0:00:21.826 ********** 2025-04-14 01:02:53.352119 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:02:53.352133 | orchestrator | 2025-04-14 01:02:53.352147 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-04-14 01:02:53.352161 | orchestrator | Monday 14 April 2025 01:02:45 +0000 (0:00:00.656) 0:00:22.483 ********** 2025-04-14 01:02:53.352175 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-04-14 01:02:53.352194 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-04-14 01:02:53.352209 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-04-14 01:02:53.352223 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-04-14 01:02:53.352237 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-04-14 01:02:53.352251 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-04-14 01:02:53.352265 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-04-14 01:02:53.352278 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-04-14 01:02:53.352292 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-04-14 01:02:53.352306 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-04-14 01:02:53.352328 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-04-14 01:02:53.352342 | orchestrator | changed: 
[testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-04-14 01:02:53.352356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-04-14 01:02:53.352370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-04-14 01:02:53.352384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-04-14 01:02:53.352398 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-04-14 01:02:53.352412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-04-14 01:02:53.352426 | orchestrator | 2025-04-14 01:02:53.352440 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:02:53.352454 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-04-14 01:02:53.352469 | orchestrator | 2025-04-14 01:02:53.352483 | orchestrator | 2025-04-14 01:02:53.352497 | orchestrator | 2025-04-14 01:02:53.352511 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:02:53.352524 | orchestrator | Monday 14 April 2025 01:02:51 +0000 (0:00:05.797) 0:00:28.280 ********** 2025-04-14 01:02:53.352538 | orchestrator | =============================================================================== 2025-04-14 01:02:53.352553 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.80s 2025-04-14 01:02:53.352567 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 2.51s 2025-04-14 01:02:53.352581 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.81s 2025-04-14 01:02:53.352601 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.56s 2025-04-14 01:02:56.399020 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.23s 2025-04-14 01:02:56.399136 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.90s 2025-04-14 01:02:56.399154 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.86s 2025-04-14 01:02:56.399168 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.82s 2025-04-14 01:02:56.399181 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s 2025-04-14 01:02:56.399194 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.66s 2025-04-14 01:02:56.399206 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.60s 2025-04-14 01:02:56.399219 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.50s 2025-04-14 01:02:56.399232 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.48s 2025-04-14 01:02:56.399263 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.46s 2025-04-14 01:02:56.399277 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.45s 2025-04-14 01:02:56.399289 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.44s 2025-04-14 01:02:56.399301 | orchestrator | 
ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.44s 2025-04-14 01:02:56.399314 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.43s 2025-04-14 01:02:56.399326 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.35s 2025-04-14 01:02:56.399339 | orchestrator | ceph-facts : set osd_pool_default_crush_rule fact ----------------------- 0.34s 2025-04-14 01:02:56.399352 | orchestrator | 2025-04-14 01:02:53 | INFO  | Task 727f5fdc-f49c-4cd5-a29e-51021d723fa4 is in state SUCCESS 2025-04-14 01:02:56.399366 | orchestrator | 2025-04-14 01:02:53 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:56.399399 | orchestrator | 2025-04-14 01:02:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:56.399427 | orchestrator | 2025-04-14 01:02:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:56.400997 | orchestrator | 2025-04-14 01:02:56 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state STARTED 2025-04-14 01:02:56.402913 | orchestrator | 2025-04-14 01:02:56 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state STARTED 2025-04-14 01:02:56.403368 | orchestrator | 2025-04-14 01:02:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:02:59.444574 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:02:59.448792 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:02:59.449644 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:02:59.452562 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:02:59.458477 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task 807185dd-d98d-455a-a706-864389644103 is in state SUCCESS 2025-04-14 01:02:59.460027 | orchestrator | 2025-04-14 01:02:59.460068 | orchestrator | 2025-04-14 01:02:59.460084 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:02:59.460099 | orchestrator | 2025-04-14 01:02:59.460114 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:02:59.460128 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.349) 0:00:00.349 ********** 2025-04-14 01:02:59.460142 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.460158 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.460172 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.460186 | orchestrator | 2025-04-14 01:02:59.460201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:02:59.460215 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.419) 0:00:00.769 ********** 2025-04-14 01:02:59.460229 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-14 01:02:59.460243 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-14 01:02:59.460258 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-14 01:02:59.460272 | orchestrator | 2025-04-14 01:02:59.460286 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-04-14 01:02:59.460300 | orchestrator | 2025-04-14 01:02:59.460314 | 
orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-14 01:02:59.460328 | orchestrator | Monday 14 April 2025 01:00:33 +0000 (0:00:00.311) 0:00:01.080 ********** 2025-04-14 01:02:59.460343 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:02:59.460358 | orchestrator | 2025-04-14 01:02:59.460372 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-04-14 01:02:59.460386 | orchestrator | Monday 14 April 2025 01:00:34 +0000 (0:00:00.863) 0:00:01.943 ********** 2025-04-14 01:02:59.460404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.460540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.460576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.460595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461189 | orchestrator | 2025-04-14 01:02:59.461204 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-04-14 01:02:59.461229 | orchestrator | Monday 14 April 2025 01:00:37 +0000 (0:00:02.534) 0:00:04.478 ********** 2025-04-14 01:02:59.461245 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-04-14 01:02:59.461260 | orchestrator | 2025-04-14 01:02:59.461276 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-04-14 01:02:59.461291 | orchestrator | Monday 14 April 2025 01:00:37 +0000 (0:00:00.693) 0:00:05.172 ********** 2025-04-14 01:02:59.461307 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.461323 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.461339 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.461354 | orchestrator | 2025-04-14 01:02:59.461370 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-04-14 01:02:59.461385 | orchestrator | Monday 14 April 2025 01:00:38 +0000 (0:00:00.450) 0:00:05.623 ********** 2025-04-14 01:02:59.461400 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:02:59.461417 | orchestrator | 2025-04-14 01:02:59.461432 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-14 01:02:59.461448 | orchestrator | Monday 14 April 2025 01:00:38 +0000 (0:00:00.495) 0:00:06.119 ********** 2025-04-14 01:02:59.461463 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:02:59.461479 | orchestrator | 2025-04-14 01:02:59.461495 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-04-14 01:02:59.461510 | orchestrator | Monday 14 April 2025 01:00:39 +0000 (0:00:00.699) 0:00:06.818 ********** 2025-04-14 01:02:59.461547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.461565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.461591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.461608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.461791 | orchestrator | 2025-04-14 01:02:59.461807 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-04-14 01:02:59.461823 | orchestrator | Monday 14 April 2025 01:00:42 +0000 (0:00:03.291) 0:00:10.110 ********** 2025-04-14 01:02:59.461870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.461897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.461914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.461930 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.461947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.461964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.461988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.462004 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.462083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.462104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.462135 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.462149 | orchestrator | 2025-04-14 01:02:59.462164 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-04-14 
01:02:59.462178 | orchestrator | Monday 14 April 2025 01:00:43 +0000 (0:00:00.709) 0:00:10.819 ********** 2025-04-14 01:02:59.462193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.462217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.462253 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.462268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.462284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.462313 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.462335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-04-14 01:02:59.462358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-04-14 01:02:59.462387 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.462402 | orchestrator | 2025-04-14 01:02:59.462416 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-04-14 01:02:59.462444 | orchestrator | Monday 14 April 2025 01:00:44 +0000 (0:00:01.050) 0:00:11.869 ********** 2025-04-14 01:02:59.462460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462622 | orchestrator | 2025-04-14 01:02:59.462637 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-04-14 01:02:59.462651 | orchestrator | Monday 14 April 2025 01:00:48 +0000 (0:00:03.325) 0:00:15.195 ********** 2025-04-14 01:02:59.462666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
"roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.462756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.462771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.462821 | orchestrator | 2025-04-14 01:02:59.462835 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-04-14 01:02:59.463033 | orchestrator | Monday 14 April 2025 01:00:53 +0000 (0:00:05.526) 0:00:20.722 ********** 2025-04-14 01:02:59.463053 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.463067 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:59.463080 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:59.463094 | orchestrator | 2025-04-14 01:02:59.463109 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-04-14 01:02:59.463123 | orchestrator | Monday 14 April 2025 01:00:55 +0000 (0:00:01.915) 0:00:22.637 ********** 2025-04-14 01:02:59.463137 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.463151 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.463165 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.463178 | orchestrator | 2025-04-14 01:02:59.463200 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-04-14 01:02:59.463215 | orchestrator | Monday 14 April 2025 01:00:56 +0000 (0:00:01.116) 0:00:23.753 ********** 2025-04-14 01:02:59.463229 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.463242 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.463256 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.463270 | orchestrator | 2025-04-14 01:02:59.463285 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-04-14 01:02:59.463299 | orchestrator | Monday 14 April 2025 01:00:57 +0000 (0:00:00.467) 0:00:24.220 ********** 2025-04-14 01:02:59.463313 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.463327 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.463341 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.463355 | orchestrator | 2025-04-14 01:02:59.463369 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-04-14 01:02:59.463383 | orchestrator | Monday 14 April 2025 01:00:57 +0000 (0:00:00.431) 0:00:24.652 ********** 2025-04-14 01:02:59.463398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.463414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.463428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.463454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.463476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.463493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-04-14 01:02:59.463508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.463523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.463550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.463565 | orchestrator | 2025-04-14 01:02:59.463579 | orchestrator | TASK [keystone 
: include_tasks] ************************************************ 2025-04-14 01:02:59.463593 | orchestrator | Monday 14 April 2025 01:01:00 +0000 (0:00:02.799) 0:00:27.451 ********** 2025-04-14 01:02:59.463615 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.463629 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.463650 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.463664 | orchestrator | 2025-04-14 01:02:59.463680 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-04-14 01:02:59.463696 | orchestrator | Monday 14 April 2025 01:01:00 +0000 (0:00:00.284) 0:00:27.735 ********** 2025-04-14 01:02:59.463712 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-14 01:02:59.463728 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-14 01:02:59.463755 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-04-14 01:02:59.463772 | orchestrator | 2025-04-14 01:02:59.463788 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-04-14 01:02:59.463804 | orchestrator | Monday 14 April 2025 01:01:02 +0000 (0:00:02.256) 0:00:29.992 ********** 2025-04-14 01:02:59.463820 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:02:59.463835 | orchestrator | 2025-04-14 01:02:59.463873 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-04-14 01:02:59.463889 | orchestrator | Monday 14 April 2025 01:01:03 +0000 (0:00:00.696) 0:00:30.689 ********** 2025-04-14 01:02:59.463904 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.463919 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.463934 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.463950 | orchestrator | 2025-04-14 01:02:59.463965 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-04-14 01:02:59.463981 | orchestrator | Monday 14 April 2025 01:01:04 +0000 (0:00:01.122) 0:00:31.811 ********** 2025-04-14 01:02:59.463997 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:02:59.464013 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-14 01:02:59.464028 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-14 01:02:59.464045 | orchestrator | 2025-04-14 01:02:59.464059 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-04-14 01:02:59.464072 | orchestrator | Monday 14 April 2025 01:01:05 +0000 (0:00:00.803) 0:00:32.615 ********** 2025-04-14 01:02:59.464086 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.464100 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.464114 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.464128 | orchestrator | 2025-04-14 01:02:59.464142 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-04-14 01:02:59.464163 | orchestrator | Monday 14 April 2025 01:01:05 +0000 (0:00:00.328) 0:00:32.943 ********** 2025-04-14 01:02:59.464177 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-14 01:02:59.464191 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-14 01:02:59.464205 | orchestrator | changed: 
[testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-04-14 01:02:59.464219 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-14 01:02:59.464233 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-14 01:02:59.464247 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-04-14 01:02:59.464261 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-14 01:02:59.464276 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-14 01:02:59.464290 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-04-14 01:02:59.464303 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-14 01:02:59.464317 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-14 01:02:59.464331 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-04-14 01:02:59.464345 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-14 01:02:59.464359 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-14 01:02:59.464373 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-04-14 01:02:59.464387 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:02:59.464401 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:02:59.464420 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:02:59.464434 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:02:59.464448 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:02:59.464462 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:02:59.464476 | orchestrator | 2025-04-14 01:02:59.464490 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-04-14 01:02:59.464504 | orchestrator | Monday 14 April 2025 01:01:17 +0000 (0:00:11.926) 0:00:44.870 ********** 2025-04-14 01:02:59.464518 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:02:59.464532 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:02:59.464545 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:02:59.464559 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-14 01:02:59.464573 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-14 01:02:59.464593 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-04-14 01:02:59.464607 | orchestrator | 2025-04-14 01:02:59.464621 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-04-14 01:02:59.464635 | orchestrator | Monday 14 April 2025 01:01:20 +0000 (0:00:03.184) 0:00:48.054 ********** 2025-04-14 01:02:59.464650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.464674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.464689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-04-14 01:02:59.464705 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-04-14 01:02:59.464808 | orchestrator | 2025-04-14 01:02:59.464822 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-14 01:02:59.464836 | orchestrator | Monday 14 April 2025 01:01:23 +0000 (0:00:02.715) 0:00:50.770 ********** 2025-04-14 01:02:59.464906 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.464922 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.464936 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.464950 | orchestrator | 2025-04-14 01:02:59.464964 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-04-14 01:02:59.464978 | orchestrator | Monday 14 April 2025 01:01:23 +0000 (0:00:00.288) 0:00:51.058 ********** 2025-04-14 01:02:59.464992 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465006 | orchestrator | 2025-04-14 01:02:59.465020 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-04-14 01:02:59.465032 | orchestrator | Monday 14 April 2025 01:01:26 +0000 (0:00:02.542) 0:00:53.600 ********** 2025-04-14 01:02:59.465045 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465057 | orchestrator | 2025-04-14 01:02:59.465070 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-04-14 01:02:59.465089 | orchestrator | Monday 14 April 2025 01:01:28 +0000 (0:00:02.289) 0:00:55.890 ********** 2025-04-14 01:02:59.465101 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.465114 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.465126 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.465138 | orchestrator | 2025-04-14 01:02:59.465151 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-04-14 01:02:59.465164 | orchestrator | Monday 14 April 2025 01:01:29 +0000 (0:00:00.915) 0:00:56.805 ********** 2025-04-14 01:02:59.465176 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.465194 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.465207 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.465220 | orchestrator | 2025-04-14 01:02:59.465233 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-04-14 01:02:59.465245 | orchestrator | Monday 14 April 2025 01:01:29 +0000 (0:00:00.318) 0:00:57.124 ********** 2025-04-14 01:02:59.465258 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.465270 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:02:59.465283 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:02:59.465295 | orchestrator | 2025-04-14 01:02:59.465308 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-04-14 01:02:59.465320 | orchestrator | Monday 14 April 2025 01:01:30 +0000 (0:00:00.479) 0:00:57.603 ********** 2025-04-14 01:02:59.465333 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465345 | orchestrator | 2025-04-14 01:02:59.465358 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap 
container] ****************** 2025-04-14 01:02:59.465370 | orchestrator | Monday 14 April 2025 01:01:43 +0000 (0:00:13.453) 0:01:11.056 ********** 2025-04-14 01:02:59.465383 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465395 | orchestrator | 2025-04-14 01:02:59.465408 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-14 01:02:59.465420 | orchestrator | Monday 14 April 2025 01:01:52 +0000 (0:00:08.793) 0:01:19.850 ********** 2025-04-14 01:02:59.465432 | orchestrator | 2025-04-14 01:02:59.465445 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-14 01:02:59.465458 | orchestrator | Monday 14 April 2025 01:01:52 +0000 (0:00:00.063) 0:01:19.913 ********** 2025-04-14 01:02:59.465470 | orchestrator | 2025-04-14 01:02:59.465482 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-04-14 01:02:59.465500 | orchestrator | Monday 14 April 2025 01:01:52 +0000 (0:00:00.055) 0:01:19.968 ********** 2025-04-14 01:02:59.465513 | orchestrator | 2025-04-14 01:02:59.465525 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-04-14 01:02:59.465537 | orchestrator | Monday 14 April 2025 01:01:52 +0000 (0:00:00.056) 0:01:20.025 ********** 2025-04-14 01:02:59.465550 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465562 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:59.465575 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:59.465587 | orchestrator | 2025-04-14 01:02:59.465599 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-04-14 01:02:59.465612 | orchestrator | Monday 14 April 2025 01:02:01 +0000 (0:00:08.960) 0:01:28.986 ********** 2025-04-14 01:02:59.465624 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:59.465636 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:59.465649 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465661 | orchestrator | 2025-04-14 01:02:59.465673 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-04-14 01:02:59.465686 | orchestrator | Monday 14 April 2025 01:02:09 +0000 (0:00:07.817) 0:01:36.803 ********** 2025-04-14 01:02:59.465698 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465710 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:02:59.465723 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:02:59.465735 | orchestrator | 2025-04-14 01:02:59.465748 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-04-14 01:02:59.465766 | orchestrator | Monday 14 April 2025 01:02:15 +0000 (0:00:05.766) 0:01:42.569 ********** 2025-04-14 01:02:59.465779 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:02:59.465791 | orchestrator | 2025-04-14 01:02:59.465804 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-04-14 01:02:59.465816 | orchestrator | Monday 14 April 2025 01:02:16 +0000 (0:00:00.825) 0:01:43.395 ********** 2025-04-14 01:02:59.465829 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:02:59.465841 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:02:59.465870 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:02:59.465883 | orchestrator | 2025-04-14 
01:02:59.465896 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-04-14 01:02:59.465908 | orchestrator | Monday 14 April 2025 01:02:17 +0000 (0:00:01.081) 0:01:44.476 ********** 2025-04-14 01:02:59.465921 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:02:59.465933 | orchestrator | 2025-04-14 01:02:59.465945 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-04-14 01:02:59.465958 | orchestrator | Monday 14 April 2025 01:02:18 +0000 (0:00:01.553) 0:01:46.030 ********** 2025-04-14 01:02:59.465970 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-04-14 01:02:59.465982 | orchestrator | 2025-04-14 01:02:59.465995 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-04-14 01:02:59.466007 | orchestrator | Monday 14 April 2025 01:02:27 +0000 (0:00:08.925) 0:01:54.956 ********** 2025-04-14 01:02:59.466044 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-04-14 01:02:59.466059 | orchestrator | 2025-04-14 01:02:59.466072 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-04-14 01:02:59.466084 | orchestrator | Monday 14 April 2025 01:02:45 +0000 (0:00:17.401) 0:02:12.357 ********** 2025-04-14 01:02:59.466096 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-04-14 01:02:59.466109 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-04-14 01:02:59.466121 | orchestrator | 2025-04-14 01:02:59.466134 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-04-14 01:02:59.466146 | orchestrator | Monday 14 April 2025 01:02:51 +0000 (0:00:06.105) 0:02:18.463 ********** 2025-04-14 01:02:59.466159 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.466184 | orchestrator | 2025-04-14 01:02:59.466207 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-04-14 01:02:59.466220 | orchestrator | Monday 14 April 2025 01:02:51 +0000 (0:00:00.118) 0:02:18.581 ********** 2025-04-14 01:02:59.466232 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:02:59.466245 | orchestrator | 2025-04-14 01:02:59.466257 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-04-14 01:02:59.466276 | orchestrator | Monday 14 April 2025 01:02:51 +0000 (0:00:00.111) 0:02:18.692 ********** 2025-04-14 01:03:02.512948 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:03:02.513068 | orchestrator | 2025-04-14 01:03:02.513089 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-04-14 01:03:02.513105 | orchestrator | Monday 14 April 2025 01:02:51 +0000 (0:00:00.134) 0:02:18.827 ********** 2025-04-14 01:03:02.513119 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:03:02.513133 | orchestrator | 2025-04-14 01:03:02.513148 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-04-14 01:03:02.513162 | orchestrator | Monday 14 April 2025 01:02:52 +0000 (0:00:00.435) 0:02:19.262 ********** 2025-04-14 01:03:02.513176 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:03:02.513191 | orchestrator | 2025-04-14 01:03:02.513205 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2025-04-14 01:03:02.513219 | orchestrator | Monday 14 April 2025 01:02:55 +0000 (0:00:03.653) 0:02:22.915 ********** 2025-04-14 01:03:02.513233 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:03:02.513274 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:03:02.513288 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:03:02.513302 | orchestrator | 2025-04-14 01:03:02.513331 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:03:02.513348 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-14 01:03:02.513363 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-14 01:03:02.513378 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-14 01:03:02.513392 | orchestrator | 2025-04-14 01:03:02.513406 | orchestrator | 2025-04-14 01:03:02.513421 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:03:02.513437 | orchestrator | Monday 14 April 2025 01:02:56 +0000 (0:00:00.592) 0:02:23.508 ********** 2025-04-14 01:03:02.513453 | orchestrator | =============================================================================== 2025-04-14 01:03:02.513469 | orchestrator | service-ks-register : keystone | Creating services --------------------- 17.40s 2025-04-14 01:03:02.513486 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.45s 2025-04-14 01:03:02.513502 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 11.93s 2025-04-14 01:03:02.513518 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.96s 2025-04-14 01:03:02.513535 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 8.93s 2025-04-14 01:03:02.513550 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.79s 2025-04-14 01:03:02.513567 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.82s 2025-04-14 01:03:02.513583 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.11s 2025-04-14 01:03:02.513599 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.77s 2025-04-14 01:03:02.513614 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.53s 2025-04-14 01:03:02.513630 | orchestrator | keystone : Creating default user role ----------------------------------- 3.65s 2025-04-14 01:03:02.513646 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.33s 2025-04-14 01:03:02.513661 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.29s 2025-04-14 01:03:02.513678 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.18s 2025-04-14 01:03:02.513694 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.80s 2025-04-14 01:03:02.513711 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.72s 2025-04-14 01:03:02.513727 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.54s 2025-04-14 01:03:02.513744 | 
orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.53s 2025-04-14 01:03:02.513760 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s 2025-04-14 01:03:02.513776 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.26s 2025-04-14 01:03:02.513793 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:02.513809 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:02.513823 | orchestrator | 2025-04-14 01:02:59 | INFO  | Task 24e8400a-a706-46e7-8fbc-ef08547f7f86 is in state SUCCESS 2025-04-14 01:03:02.513837 | orchestrator | 2025-04-14 01:02:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:02.513891 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:05.564499 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:05.564621 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:05.564640 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:05.564674 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:05.564688 | orchestrator | 2025-04-14 01:03:02 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:05.564701 | orchestrator | 2025-04-14 01:03:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:05.564731 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:05.565473 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:05.566710 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:05.568220 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:05.569692 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:05.570870 | orchestrator | 2025-04-14 01:03:05 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:08.631029 | orchestrator | 2025-04-14 01:03:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:08.631177 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:08.631685 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:08.633269 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:08.634600 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:08.635984 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:08.638135 | orchestrator | 2025-04-14 01:03:08 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 
01:03:11.683190 | orchestrator | 2025-04-14 01:03:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:11.683331 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:11.683977 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:11.685118 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:11.686327 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:11.687065 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:11.687873 | orchestrator | 2025-04-14 01:03:11 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:14.732160 | orchestrator | 2025-04-14 01:03:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:14.732288 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:14.733941 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:14.735108 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:14.737923 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:14.739783 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:14.742189 | orchestrator | 2025-04-14 01:03:14 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:14.742307 | orchestrator | 2025-04-14 01:03:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:17.784890 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:17.786463 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:17.787733 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:17.789644 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:17.791508 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:17.792955 | orchestrator | 2025-04-14 01:03:17 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:17.793096 | orchestrator | 2025-04-14 01:03:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:20.838983 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:20.841443 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:20.842998 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:20.847393 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:20.850357 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in 
state STARTED 2025-04-14 01:03:20.853203 | orchestrator | 2025-04-14 01:03:20 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:23.894656 | orchestrator | 2025-04-14 01:03:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:23.894796 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:23.897588 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:23.899004 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:23.900892 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:23.902676 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:23.904663 | orchestrator | 2025-04-14 01:03:23 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:26.964482 | orchestrator | 2025-04-14 01:03:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:26.964574 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:26.966364 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:26.969051 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:26.971084 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:26.972922 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:26.974789 | orchestrator | 2025-04-14 01:03:26 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:30.026782 | orchestrator | 2025-04-14 01:03:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:30.026985 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:30.029022 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:30.030163 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:30.031596 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:30.033526 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:30.034819 | orchestrator | 2025-04-14 01:03:30 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:33.091248 | orchestrator | 2025-04-14 01:03:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:33.091395 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:33.093446 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:33.095442 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:33.097127 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task 
9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:33.098731 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:33.100288 | orchestrator | 2025-04-14 01:03:33 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:36.148211 | orchestrator | 2025-04-14 01:03:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:36.148357 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:36.149650 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:36.149699 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:36.150927 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:36.152032 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:36.153519 | orchestrator | 2025-04-14 01:03:36 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:36.153842 | orchestrator | 2025-04-14 01:03:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:39.208491 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:39.210807 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:39.210973 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:39.211011 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:39.211439 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:39.213289 | orchestrator | 2025-04-14 01:03:39 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:42.259426 | orchestrator | 2025-04-14 01:03:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:42.259607 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:42.260196 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:42.260945 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:42.261897 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:42.262941 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:42.263963 | orchestrator | 2025-04-14 01:03:42 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:45.316433 | orchestrator | 2025-04-14 01:03:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:45.316873 | orchestrator | 2025-04-14 01:03:45 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:45.321124 | orchestrator | 2025-04-14 01:03:45 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:45.321220 | orchestrator | 2025-04-14 
01:03:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:45.321254 | orchestrator | 2025-04-14 01:03:45 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:45.321944 | orchestrator | 2025-04-14 01:03:45 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:45.322630 | orchestrator | 2025-04-14 01:03:45 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:48.377641 | orchestrator | 2025-04-14 01:03:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:48.377791 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:48.378514 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:48.379535 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:48.380626 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:48.381489 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:48.382831 | orchestrator | 2025-04-14 01:03:48 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:51.425876 | orchestrator | 2025-04-14 01:03:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:51.426063 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:51.427140 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:51.427179 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:51.427711 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:51.427741 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:51.428420 | orchestrator | 2025-04-14 01:03:51 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:54.454210 | orchestrator | 2025-04-14 01:03:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:54.454357 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:54.454556 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:54.455390 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:54.456241 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:54.457168 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state STARTED 2025-04-14 01:03:54.458061 | orchestrator | 2025-04-14 01:03:54 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:03:57.504343 | orchestrator | 2025-04-14 01:03:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:03:57.504484 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:03:57.505994 | 
orchestrator | 2025-04-14 01:03:57 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:03:57.506942 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:03:57.509373 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:03:57.510511 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task 756b88ca-7c2f-4e4a-8400-87858d9ce946 is in state SUCCESS 2025-04-14 01:03:57.510879 | orchestrator | 2025-04-14 01:03:57.510911 | orchestrator | 2025-04-14 01:03:57.510926 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-04-14 01:03:57.510941 | orchestrator | 2025-04-14 01:03:57.510955 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-04-14 01:03:57.510969 | orchestrator | Monday 14 April 2025 01:02:14 +0000 (0:00:00.146) 0:00:00.146 ********** 2025-04-14 01:03:57.510984 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-14 01:03:57.510999 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511013 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511027 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-14 01:03:57.511041 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511072 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-14 01:03:57.511086 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-14 01:03:57.511100 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-14 01:03:57.511114 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-14 01:03:57.511128 | orchestrator | 2025-04-14 01:03:57.511166 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-04-14 01:03:57.511181 | orchestrator | Monday 14 April 2025 01:02:17 +0000 (0:00:03.200) 0:00:03.346 ********** 2025-04-14 01:03:57.511195 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-04-14 01:03:57.511209 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511223 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511237 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-04-14 01:03:57.511251 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-04-14 01:03:57.511265 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-04-14 01:03:57.511279 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-04-14 01:03:57.511292 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-04-14 01:03:57.511390 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-04-14 01:03:57.511407 | orchestrator | 2025-04-14 01:03:57.511421 | orchestrator | TASK [Point out that the following task takes some time and does not 
give any output] *** 2025-04-14 01:03:57.511435 | orchestrator | Monday 14 April 2025 01:02:18 +0000 (0:00:00.235) 0:00:03.582 ********** 2025-04-14 01:03:57.511449 | orchestrator | ok: [testbed-manager] => { 2025-04-14 01:03:57.511466 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-04-14 01:03:57.511482 | orchestrator | } 2025-04-14 01:03:57.511498 | orchestrator | 2025-04-14 01:03:57.511513 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-04-14 01:03:57.511529 | orchestrator | Monday 14 April 2025 01:02:18 +0000 (0:00:00.174) 0:00:03.756 ********** 2025-04-14 01:03:57.511545 | orchestrator | changed: [testbed-manager] 2025-04-14 01:03:57.511560 | orchestrator | 2025-04-14 01:03:57.511577 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-04-14 01:03:57.511592 | orchestrator | Monday 14 April 2025 01:02:52 +0000 (0:00:34.267) 0:00:38.024 ********** 2025-04-14 01:03:57.511609 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-04-14 01:03:57.511639 | orchestrator | 2025-04-14 01:03:57.511655 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-04-14 01:03:57.511671 | orchestrator | Monday 14 April 2025 01:02:52 +0000 (0:00:00.494) 0:00:38.518 ********** 2025-04-14 01:03:57.511687 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-04-14 01:03:57.511703 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-04-14 01:03:57.511719 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-04-14 01:03:57.511735 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-04-14 01:03:57.511752 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-04-14 01:03:57.511780 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-04-14 01:03:57.513168 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-04-14 01:03:57.513217 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-04-14 01:03:57.513231 | orchestrator | 2025-04-14 01:03:57.513246 | orchestrator | TASK [Copy ceph custom keys to the 
configuration repository] ******************* 2025-04-14 01:03:57.513261 | orchestrator | Monday 14 April 2025 01:02:56 +0000 (0:00:03.146) 0:00:41.664 ********** 2025-04-14 01:03:57.513275 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:03:57.513301 | orchestrator | 2025-04-14 01:03:57.513315 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:03:57.513330 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 01:03:57.513344 | orchestrator | 2025-04-14 01:03:57.513359 | orchestrator | Monday 14 April 2025 01:02:56 +0000 (0:00:00.034) 0:00:41.698 ********** 2025-04-14 01:03:57.513373 | orchestrator | =============================================================================== 2025-04-14 01:03:57.513386 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 34.27s 2025-04-14 01:03:57.513400 | orchestrator | Check ceph keys --------------------------------------------------------- 3.20s 2025-04-14 01:03:57.513414 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 3.15s 2025-04-14 01:03:57.513428 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.49s 2025-04-14 01:03:57.513447 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.24s 2025-04-14 01:03:57.513461 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.17s 2025-04-14 01:03:57.513475 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s 2025-04-14 01:03:57.513489 | orchestrator | 2025-04-14 01:03:57.513512 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:03:57.514604 | orchestrator | 2025-04-14 01:03:57 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:00.550690 | orchestrator | 2025-04-14 01:03:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:00.550891 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:00.551377 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:00.551422 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:00.551458 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:00.551969 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:00.553320 | orchestrator | 2025-04-14 01:04:00 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:03.581339 | orchestrator | 2025-04-14 01:04:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:03.581473 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:03.584069 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:03.586165 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:03.588651 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task 
9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:03.590922 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:03.592716 | orchestrator | 2025-04-14 01:04:03 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:06.624977 | orchestrator | 2025-04-14 01:04:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:06.625149 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:06.627498 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:09.651494 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:09.651616 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:09.651654 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:09.651671 | orchestrator | 2025-04-14 01:04:06 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:09.651686 | orchestrator | 2025-04-14 01:04:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:09.651718 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:09.652254 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:09.652287 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:09.652310 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:09.652944 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:09.653581 | orchestrator | 2025-04-14 01:04:09 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:12.685926 | orchestrator | 2025-04-14 01:04:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:12.686127 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:12.688310 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:12.688445 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:12.688693 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:12.688729 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:12.689253 | orchestrator | 2025-04-14 01:04:12 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:15.726349 | orchestrator | 2025-04-14 01:04:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:15.726463 | orchestrator | 2025-04-14 01:04:15 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:15.727287 | orchestrator | 2025-04-14 01:04:15 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:15.727313 | orchestrator | 2025-04-14 
01:04:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:15.727349 | orchestrator | 2025-04-14 01:04:15 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:15.727905 | orchestrator | 2025-04-14 01:04:15 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:15.728638 | orchestrator | 2025-04-14 01:04:15 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:18.757873 | orchestrator | 2025-04-14 01:04:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:18.758244 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:18.758617 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:18.758670 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:18.758937 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:18.759623 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:18.760198 | orchestrator | 2025-04-14 01:04:18 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:21.787932 | orchestrator | 2025-04-14 01:04:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:21.788074 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:21.788639 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:21.788676 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:21.788962 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:21.789527 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:21.790272 | orchestrator | 2025-04-14 01:04:21 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:24.825517 | orchestrator | 2025-04-14 01:04:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:24.825628 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:24.826993 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:24.829252 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:24.830852 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:24.833253 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:24.835714 | orchestrator | 2025-04-14 01:04:24 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:24.836155 | orchestrator | 2025-04-14 01:04:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:27.867350 | orchestrator | 2025-04-14 01:04:27 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:27.868707 | 
orchestrator | 2025-04-14 01:04:27 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:27.868763 | orchestrator | 2025-04-14 01:04:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:27.868879 | orchestrator | 2025-04-14 01:04:27 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:27.869300 | orchestrator | 2025-04-14 01:04:27 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state STARTED 2025-04-14 01:04:27.869959 | orchestrator | 2025-04-14 01:04:27 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:30.898863 | orchestrator | 2025-04-14 01:04:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:30.899039 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:30.900152 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:30.900201 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:30.900518 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:30.900907 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task 70895974-8723-4bdf-8dd0-7e6110c37e9e is in state SUCCESS 2025-04-14 01:04:30.902512 | orchestrator | 2025-04-14 01:04:30.902546 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-04-14 01:04:30.902560 | orchestrator | 2025-04-14 01:04:30.902574 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-04-14 01:04:30.902586 | orchestrator | Monday 14 April 2025 01:02:59 +0000 (0:00:00.185) 0:00:00.186 ********** 2025-04-14 01:04:30.902714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-04-14 01:04:30.902749 | orchestrator | 2025-04-14 01:04:30.902762 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-04-14 01:04:30.902814 | orchestrator | Monday 14 April 2025 01:03:00 +0000 (0:00:00.214) 0:00:00.400 ********** 2025-04-14 01:04:30.902830 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-04-14 01:04:30.902843 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-04-14 01:04:30.902856 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-04-14 01:04:30.902869 | orchestrator | 2025-04-14 01:04:30.902882 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-04-14 01:04:30.902894 | orchestrator | Monday 14 April 2025 01:03:01 +0000 (0:00:01.329) 0:00:01.730 ********** 2025-04-14 01:04:30.902907 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-04-14 01:04:30.902920 | orchestrator | 2025-04-14 01:04:30.902933 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-04-14 01:04:30.902945 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:01.154) 0:00:02.884 ********** 2025-04-14 01:04:30.902958 | orchestrator | changed: [testbed-manager] 2025-04-14 01:04:30.902977 | orchestrator | 2025-04-14 01:04:30.902990 | orchestrator | 
TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-04-14 01:04:30.903002 | orchestrator | Monday 14 April 2025 01:03:03 +0000 (0:00:00.877) 0:00:03.762 ********** 2025-04-14 01:04:30.903015 | orchestrator | changed: [testbed-manager] 2025-04-14 01:04:30.903027 | orchestrator | 2025-04-14 01:04:30.903040 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-04-14 01:04:30.903053 | orchestrator | Monday 14 April 2025 01:03:04 +0000 (0:00:00.958) 0:00:04.720 ********** 2025-04-14 01:04:30.903065 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-04-14 01:04:30.903078 | orchestrator | ok: [testbed-manager] 2025-04-14 01:04:30.903090 | orchestrator | 2025-04-14 01:04:30.903103 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-04-14 01:04:30.903115 | orchestrator | Monday 14 April 2025 01:03:45 +0000 (0:00:40.953) 0:00:45.674 ********** 2025-04-14 01:04:30.903148 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-04-14 01:04:30.903161 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-04-14 01:04:30.903174 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-04-14 01:04:30.903186 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-04-14 01:04:30.903199 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-04-14 01:04:30.903211 | orchestrator | 2025-04-14 01:04:30.903224 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-04-14 01:04:30.903236 | orchestrator | Monday 14 April 2025 01:03:49 +0000 (0:00:04.244) 0:00:49.919 ********** 2025-04-14 01:04:30.903249 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-04-14 01:04:30.903261 | orchestrator | 2025-04-14 01:04:30.903274 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-04-14 01:04:30.903286 | orchestrator | Monday 14 April 2025 01:03:50 +0000 (0:00:00.466) 0:00:50.385 ********** 2025-04-14 01:04:30.903299 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:04:30.903316 | orchestrator | 2025-04-14 01:04:30.903329 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-04-14 01:04:30.903341 | orchestrator | Monday 14 April 2025 01:03:50 +0000 (0:00:00.109) 0:00:50.495 ********** 2025-04-14 01:04:30.903355 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:04:30.903370 | orchestrator | 2025-04-14 01:04:30.903384 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-04-14 01:04:30.903397 | orchestrator | Monday 14 April 2025 01:03:50 +0000 (0:00:00.282) 0:00:50.777 ********** 2025-04-14 01:04:30.903411 | orchestrator | changed: [testbed-manager] 2025-04-14 01:04:30.903425 | orchestrator | 2025-04-14 01:04:30.903440 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-04-14 01:04:30.903453 | orchestrator | Monday 14 April 2025 01:03:51 +0000 (0:00:01.342) 0:00:52.120 ********** 2025-04-14 01:04:30.903467 | orchestrator | changed: [testbed-manager] 2025-04-14 01:04:30.903480 | orchestrator | 2025-04-14 01:04:30.903495 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-04-14 01:04:30.903510 | orchestrator | Monday 14 April 2025 01:03:53 +0000 (0:00:01.160) 
0:00:53.280 ********** 2025-04-14 01:04:30.903524 | orchestrator | changed: [testbed-manager] 2025-04-14 01:04:30.903538 | orchestrator | 2025-04-14 01:04:30.903552 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-04-14 01:04:30.903566 | orchestrator | Monday 14 April 2025 01:03:53 +0000 (0:00:00.503) 0:00:53.783 ********** 2025-04-14 01:04:30.903580 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-04-14 01:04:30.903599 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-04-14 01:04:30.903614 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-04-14 01:04:30.903627 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-04-14 01:04:30.903641 | orchestrator | 2025-04-14 01:04:30.903655 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:04:30.903669 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-04-14 01:04:30.903684 | orchestrator | 2025-04-14 01:04:30.903707 | orchestrator | Monday 14 April 2025 01:03:54 +0000 (0:00:01.214) 0:00:54.998 ********** 2025-04-14 01:04:33.925935 | orchestrator | =============================================================================== 2025-04-14 01:04:33.926123 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.95s 2025-04-14 01:04:33.926273 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.24s 2025-04-14 01:04:33.926290 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.34s 2025-04-14 01:04:33.926305 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.33s 2025-04-14 01:04:33.926319 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.21s 2025-04-14 01:04:33.926380 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.16s 2025-04-14 01:04:33.926404 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-04-14 01:04:33.926426 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.96s 2025-04-14 01:04:33.926449 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s 2025-04-14 01:04:33.926474 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.50s 2025-04-14 01:04:33.926499 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-04-14 01:04:33.926514 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2025-04-14 01:04:33.926528 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-04-14 01:04:33.926543 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s 2025-04-14 01:04:33.926557 | orchestrator | 2025-04-14 01:04:33.926573 | orchestrator | 2025-04-14 01:04:30 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:33.926588 | orchestrator | 2025-04-14 01:04:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:33.926635 | orchestrator | 2025-04-14 01:04:33 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:33.928561 | orchestrator | 2025-04-14 01:04:33 | INFO  | Task 
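The osism.services.cephclient play above stages /opt/cephclient (configuration, keyring, docker-compose.yml), starts the client container, and installs thin wrapper scripts (ceph, ceph-authtool, rados, radosgw-admin, rbd) on testbed-manager. A rough sketch of what such a wrapper could look like follows; the wrapper path and the container name "cephclient" are assumptions for illustration, not the role's exact output.

    #!/usr/bin/env bash
    # Sketch of a ceph wrapper script (path and container name are assumptions):
    # it forwards the CLI call into the client container that was started from
    # /opt/cephclient/docker-compose.yml.
    set -e
    exec docker exec -i cephclient ceph "$@"

Bringing the service up by hand would roughly correspond to docker compose -f /opt/cephclient/docker-compose.yml up -d, which is what the "Restart cephclient service" handler and the subsequent health wait amount to.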
b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:33.928649 | orchestrator | 2025-04-14 01:04:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:33.929307 | orchestrator | 2025-04-14 01:04:33 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:33.929984 | orchestrator | 2025-04-14 01:04:33 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:36.959821 | orchestrator | 2025-04-14 01:04:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:36.959966 | orchestrator | 2025-04-14 01:04:36 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:36.960193 | orchestrator | 2025-04-14 01:04:36 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:36.961808 | orchestrator | 2025-04-14 01:04:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:36.962497 | orchestrator | 2025-04-14 01:04:36 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:36.963729 | orchestrator | 2025-04-14 01:04:36 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:40.004145 | orchestrator | 2025-04-14 01:04:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:40.004292 | orchestrator | 2025-04-14 01:04:40 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:40.005760 | orchestrator | 2025-04-14 01:04:40 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:40.005867 | orchestrator | 2025-04-14 01:04:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:40.005897 | orchestrator | 2025-04-14 01:04:40 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:43.053564 | orchestrator | 2025-04-14 01:04:40 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:43.053671 | orchestrator | 2025-04-14 01:04:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:43.053702 | orchestrator | 2025-04-14 01:04:43 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:43.053920 | orchestrator | 2025-04-14 01:04:43 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:43.054724 | orchestrator | 2025-04-14 01:04:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:43.055443 | orchestrator | 2025-04-14 01:04:43 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:43.056186 | orchestrator | 2025-04-14 01:04:43 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:46.093298 | orchestrator | 2025-04-14 01:04:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:46.093434 | orchestrator | 2025-04-14 01:04:46 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:46.093722 | orchestrator | 2025-04-14 01:04:46 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:46.094656 | orchestrator | 2025-04-14 01:04:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:46.096475 | orchestrator | 2025-04-14 01:04:46 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:49.138294 | orchestrator | 2025-04-14 
01:04:46 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:49.138518 | orchestrator | 2025-04-14 01:04:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:49.138559 | orchestrator | 2025-04-14 01:04:49 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:49.139868 | orchestrator | 2025-04-14 01:04:49 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:49.139928 | orchestrator | 2025-04-14 01:04:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:49.140529 | orchestrator | 2025-04-14 01:04:49 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:49.141679 | orchestrator | 2025-04-14 01:04:49 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:52.178541 | orchestrator | 2025-04-14 01:04:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:52.178706 | orchestrator | 2025-04-14 01:04:52 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:52.179594 | orchestrator | 2025-04-14 01:04:52 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:52.182574 | orchestrator | 2025-04-14 01:04:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:52.184292 | orchestrator | 2025-04-14 01:04:52 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:52.184334 | orchestrator | 2025-04-14 01:04:52 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:55.225321 | orchestrator | 2025-04-14 01:04:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:55.225463 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:55.225856 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:55.226864 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:55.227580 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:55.228519 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:55.229538 | orchestrator | 2025-04-14 01:04:55 | INFO  | Task 52f89e08-0df7-4243-a6cc-ecaeb58056d0 is in state STARTED 2025-04-14 01:04:58.284657 | orchestrator | 2025-04-14 01:04:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:04:58.284808 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:04:58.284929 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:04:58.287545 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:04:58.290424 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:04:58.291140 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:04:58.292621 | orchestrator | 2025-04-14 01:04:58 | INFO  | Task 52f89e08-0df7-4243-a6cc-ecaeb58056d0 is in state STARTED 2025-04-14 01:05:01.352745 | 
orchestrator | 2025-04-14 01:04:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:01.353040 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:01.353997 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:01.354082 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:01.354890 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:05:01.357322 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:01.357877 | orchestrator | 2025-04-14 01:05:01 | INFO  | Task 52f89e08-0df7-4243-a6cc-ecaeb58056d0 is in state STARTED 2025-04-14 01:05:04.403014 | orchestrator | 2025-04-14 01:05:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:04.403150 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:04.403829 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:04.404502 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:04.405578 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:05:04.406631 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:04.407583 | orchestrator | 2025-04-14 01:05:04 | INFO  | Task 52f89e08-0df7-4243-a6cc-ecaeb58056d0 is in state STARTED 2025-04-14 01:05:07.450340 | orchestrator | 2025-04-14 01:05:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:07.450491 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:07.451028 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:07.451076 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:07.453223 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state STARTED 2025-04-14 01:05:07.454929 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:07.457005 | orchestrator | 2025-04-14 01:05:07 | INFO  | Task 52f89e08-0df7-4243-a6cc-ecaeb58056d0 is in state SUCCESS 2025-04-14 01:05:07.457404 | orchestrator | 2025-04-14 01:05:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:10.497225 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:10.497442 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:10.498575 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:10.499886 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task 9718b2ba-71aa-48f1-8ece-ec87c2bd7d09 is in state SUCCESS 2025-04-14 01:05:10.501625 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-04-14 
01:05:10.501669 | orchestrator | 2025-04-14 01:05:10.501684 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-04-14 01:05:10.501699 | orchestrator | 2025-04-14 01:05:10.501714 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-04-14 01:05:10.501728 | orchestrator | Monday 14 April 2025 01:03:57 +0000 (0:00:00.383) 0:00:00.383 ********** 2025-04-14 01:05:10.501813 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.501852 | orchestrator | 2025-04-14 01:05:10.501867 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-04-14 01:05:10.501882 | orchestrator | Monday 14 April 2025 01:03:59 +0000 (0:00:02.049) 0:00:02.432 ********** 2025-04-14 01:05:10.501896 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.501910 | orchestrator | 2025-04-14 01:05:10.501924 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-04-14 01:05:10.501938 | orchestrator | Monday 14 April 2025 01:04:00 +0000 (0:00:00.949) 0:00:03.382 ********** 2025-04-14 01:05:10.501952 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.501966 | orchestrator | 2025-04-14 01:05:10.501980 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-04-14 01:05:10.502121 | orchestrator | Monday 14 April 2025 01:04:01 +0000 (0:00:00.907) 0:00:04.289 ********** 2025-04-14 01:05:10.502137 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502151 | orchestrator | 2025-04-14 01:05:10.502165 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-04-14 01:05:10.502179 | orchestrator | Monday 14 April 2025 01:04:02 +0000 (0:00:01.031) 0:00:05.320 ********** 2025-04-14 01:05:10.502194 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502209 | orchestrator | 2025-04-14 01:05:10.502225 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-04-14 01:05:10.502247 | orchestrator | Monday 14 April 2025 01:04:03 +0000 (0:00:00.891) 0:00:06.211 ********** 2025-04-14 01:05:10.502264 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502280 | orchestrator | 2025-04-14 01:05:10.502296 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-04-14 01:05:10.502311 | orchestrator | Monday 14 April 2025 01:04:04 +0000 (0:00:01.031) 0:00:07.243 ********** 2025-04-14 01:05:10.502327 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502342 | orchestrator | 2025-04-14 01:05:10.502358 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-04-14 01:05:10.502373 | orchestrator | Monday 14 April 2025 01:04:05 +0000 (0:00:01.159) 0:00:08.403 ********** 2025-04-14 01:05:10.502389 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502405 | orchestrator | 2025-04-14 01:05:10.502420 | orchestrator | TASK [Create admin user] ******************************************************* 2025-04-14 01:05:10.502436 | orchestrator | Monday 14 April 2025 01:04:06 +0000 (0:00:00.981) 0:00:09.384 ********** 2025-04-14 01:05:10.502452 | orchestrator | changed: [testbed-manager] 2025-04-14 01:05:10.502468 | orchestrator | 2025-04-14 01:05:10.502483 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-04-14 
01:05:10.502499 | orchestrator | Monday 14 April 2025 01:04:23 +0000 (0:00:16.603) 0:00:25.988 ********** 2025-04-14 01:05:10.502535 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:05:10.502552 | orchestrator | 2025-04-14 01:05:10.502567 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-14 01:05:10.502581 | orchestrator | 2025-04-14 01:05:10.502595 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-14 01:05:10.502609 | orchestrator | Monday 14 April 2025 01:04:23 +0000 (0:00:00.623) 0:00:26.611 ********** 2025-04-14 01:05:10.502623 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.502636 | orchestrator | 2025-04-14 01:05:10.502650 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-14 01:05:10.502664 | orchestrator | 2025-04-14 01:05:10.502678 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-14 01:05:10.502692 | orchestrator | Monday 14 April 2025 01:04:26 +0000 (0:00:02.153) 0:00:28.765 ********** 2025-04-14 01:05:10.502707 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:05:10.502721 | orchestrator | 2025-04-14 01:05:10.502735 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-04-14 01:05:10.502772 | orchestrator | 2025-04-14 01:05:10.502787 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-04-14 01:05:10.502801 | orchestrator | Monday 14 April 2025 01:04:27 +0000 (0:00:01.709) 0:00:30.475 ********** 2025-04-14 01:05:10.502815 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:05:10.502829 | orchestrator | 2025-04-14 01:05:10.502843 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:05:10.502858 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-04-14 01:05:10.502874 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:05:10.502889 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:05:10.502903 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:05:10.502917 | orchestrator | 2025-04-14 01:05:10.502931 | orchestrator | 2025-04-14 01:05:10.502945 | orchestrator | 2025-04-14 01:05:10.502959 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:05:10.502973 | orchestrator | Monday 14 April 2025 01:04:29 +0000 (0:00:01.397) 0:00:31.873 ********** 2025-04-14 01:05:10.502987 | orchestrator | =============================================================================== 2025-04-14 01:05:10.503001 | orchestrator | Create admin user ------------------------------------------------------ 16.60s 2025-04-14 01:05:10.503029 | orchestrator | Restart ceph manager service -------------------------------------------- 5.26s 2025-04-14 01:05:10.503044 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.05s 2025-04-14 01:05:10.503058 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.16s 2025-04-14 01:05:10.503072 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 
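The ceph dashboard play above maps closely onto standard ceph mgr module and config commands, executed on testbed-manager (via the cephclient wrapper sketched earlier). An approximate CLI equivalent is shown below; the admin user name and the password file path are placeholders, not values taken from the log.

    # Approximate CLI equivalent of the dashboard bootstrap tasks above.
    ceph mgr module disable dashboard
    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_port 7000
    ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
    ceph config set mgr mgr/dashboard/standby_behaviour error
    ceph config set mgr mgr/dashboard/standby_error_status_code 404
    ceph mgr module enable dashboard

    # Create the dashboard admin user from a temporary password file
    # (placeholder path), then remove the file again.
    ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
    rm -f /tmp/ceph_dashboard_password

The three "Restart ceph manager service" plays then restart the ceph-mgr daemons on testbed-node-0/1/2 so the changed dashboard settings take effect.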
---------------------- 1.03s 2025-04-14 01:05:10.503086 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.03s 2025-04-14 01:05:10.503100 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.98s 2025-04-14 01:05:10.503114 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.95s 2025-04-14 01:05:10.503127 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2025-04-14 01:05:10.503141 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.89s 2025-04-14 01:05:10.503161 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.62s 2025-04-14 01:05:10.503175 | orchestrator | 2025-04-14 01:05:10.503189 | orchestrator | None 2025-04-14 01:05:10.503202 | orchestrator | 2025-04-14 01:05:10.503216 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:05:10.503237 | orchestrator | 2025-04-14 01:05:10.503251 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:05:10.503265 | orchestrator | Monday 14 April 2025 01:03:01 +0000 (0:00:00.464) 0:00:00.464 ********** 2025-04-14 01:05:10.503279 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:05:10.503293 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:05:10.503307 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:05:10.503321 | orchestrator | 2025-04-14 01:05:10.503335 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:05:10.503348 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:00.703) 0:00:01.168 ********** 2025-04-14 01:05:10.503362 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-04-14 01:05:10.503376 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-04-14 01:05:10.503390 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-04-14 01:05:10.503404 | orchestrator | 2025-04-14 01:05:10.503418 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-04-14 01:05:10.503431 | orchestrator | 2025-04-14 01:05:10.503445 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-14 01:05:10.503459 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:00.701) 0:00:01.869 ********** 2025-04-14 01:05:10.503473 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:05:10.503488 | orchestrator | 2025-04-14 01:05:10.503502 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-04-14 01:05:10.503516 | orchestrator | Monday 14 April 2025 01:03:03 +0000 (0:00:01.151) 0:00:03.021 ********** 2025-04-14 01:05:10.503530 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-04-14 01:05:10.503544 | orchestrator | 2025-04-14 01:05:10.503558 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-04-14 01:05:10.503572 | orchestrator | Monday 14 April 2025 01:03:07 +0000 (0:00:03.301) 0:00:06.322 ********** 2025-04-14 01:05:10.503585 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-04-14 01:05:10.503599 | orchestrator | 
changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-04-14 01:05:10.503613 | orchestrator | 2025-04-14 01:05:10.503627 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-04-14 01:05:10.503646 | orchestrator | Monday 14 April 2025 01:03:13 +0000 (0:00:06.465) 0:00:12.788 ********** 2025-04-14 01:05:10.503660 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-04-14 01:05:10.503674 | orchestrator | 2025-04-14 01:05:10.503688 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-04-14 01:05:10.503702 | orchestrator | Monday 14 April 2025 01:03:17 +0000 (0:00:03.472) 0:00:16.260 ********** 2025-04-14 01:05:10.503715 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:05:10.503729 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-04-14 01:05:10.503765 | orchestrator | 2025-04-14 01:05:10.503781 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-04-14 01:05:10.503795 | orchestrator | Monday 14 April 2025 01:03:21 +0000 (0:00:04.026) 0:00:20.287 ********** 2025-04-14 01:05:10.503808 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:05:10.503823 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-04-14 01:05:10.503836 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-04-14 01:05:10.503850 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-04-14 01:05:10.503864 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-04-14 01:05:10.503878 | orchestrator | 2025-04-14 01:05:10.503892 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-04-14 01:05:10.503906 | orchestrator | Monday 14 April 2025 01:03:36 +0000 (0:00:15.286) 0:00:35.573 ********** 2025-04-14 01:05:10.503929 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-04-14 01:05:10.503944 | orchestrator | 2025-04-14 01:05:10.503958 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-04-14 01:05:10.503972 | orchestrator | Monday 14 April 2025 01:03:41 +0000 (0:00:05.304) 0:00:40.878 ********** 2025-04-14 01:05:10.503997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
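The service-ks-register tasks above (Creating services, endpoints, projects, users, roles, and granting user roles) correspond to standard OpenStack client calls. A hedged sketch of the equivalent commands follows; the region name and the password variable are simplified placeholders, while the endpoint URLs and role names are taken from the log.

    # Approximate openstack CLI equivalent of the barbican Keystone registration.
    openstack service create --name barbican key-manager
    openstack endpoint create --region RegionOne key-manager internal https://api-int.testbed.osism.xyz:9311
    openstack endpoint create --region RegionOne key-manager public   https://api.testbed.osism.xyz:9311
    openstack project create service
    openstack user create --project service --password "$BARBICAN_PASSWORD" barbican
    openstack role create creator
    openstack role create observer
    openstack role create audit
    openstack role create key-manager:service-admin
    openstack role add --project service --user barbican admin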
'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504157 | orchestrator | 2025-04-14 01:05:10.504171 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-04-14 01:05:10.504186 | orchestrator | Monday 14 April 2025 01:03:44 +0000 (0:00:02.504) 0:00:43.383 ********** 2025-04-14 01:05:10.504200 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-04-14 01:05:10.504214 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-04-14 01:05:10.504228 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-04-14 01:05:10.504242 | orchestrator | 2025-04-14 01:05:10.504256 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-04-14 01:05:10.504282 | orchestrator | Monday 14 April 2025 01:03:47 +0000 (0:00:03.635) 0:00:47.018 ********** 2025-04-14 01:05:10.504296 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.504310 | orchestrator | 2025-04-14 01:05:10.504325 | orchestrator | TASK [barbican : Set 
barbican policy file] ************************************* 2025-04-14 01:05:10.504338 | orchestrator | Monday 14 April 2025 01:03:48 +0000 (0:00:00.375) 0:00:47.393 ********** 2025-04-14 01:05:10.504352 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.504366 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.504380 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.504394 | orchestrator | 2025-04-14 01:05:10.504408 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-14 01:05:10.504422 | orchestrator | Monday 14 April 2025 01:03:48 +0000 (0:00:00.627) 0:00:48.021 ********** 2025-04-14 01:05:10.504436 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:05:10.504450 | orchestrator | 2025-04-14 01:05:10.504464 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-04-14 01:05:10.504484 | orchestrator | Monday 14 April 2025 01:03:49 +0000 (0:00:01.037) 0:00:49.058 ********** 2025-04-14 01:05:10.504507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.504561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504628 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.504663 | orchestrator | 2025-04-14 01:05:10.504678 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-04-14 01:05:10.504692 | orchestrator | Monday 14 April 2025 01:03:54 +0000 (0:00:04.571) 0:00:53.630 ********** 2025-04-14 01:05:10.504707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.504730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.504785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.504813 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.504837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.504861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.504876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.504891 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.504914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.505419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505462 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.505476 | orchestrator | 2025-04-14 01:05:10.505491 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-04-14 01:05:10.505506 | orchestrator | Monday 14 April 2025 01:03:55 +0000 (0:00:01.266) 0:00:54.896 ********** 2025-04-14 01:05:10.505531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.505547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505582 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.505597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.505613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505656 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.505671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.505686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.505723 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.505736 | orchestrator | 2025-04-14 01:05:10.505779 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-04-14 01:05:10.505794 | orchestrator | Monday 14 April 2025 01:03:57 +0000 (0:00:01.630) 0:00:56.526 ********** 2025-04-14 01:05:10.505809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.505832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.505847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.505872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.505969 | orchestrator | 2025-04-14 01:05:10.505984 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-04-14 01:05:10.505998 | orchestrator | Monday 14 April 2025 01:04:01 +0000 (0:00:04.491) 0:01:01.018 ********** 2025-04-14 01:05:10.506012 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:05:10.506064 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.506081 | 
orchestrator | changed: [testbed-node-2] 2025-04-14 01:05:10.506096 | orchestrator | 2025-04-14 01:05:10.506111 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-04-14 01:05:10.506127 | orchestrator | Monday 14 April 2025 01:04:05 +0000 (0:00:03.724) 0:01:04.743 ********** 2025-04-14 01:05:10.506148 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:05:10.506164 | orchestrator | 2025-04-14 01:05:10.506180 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-04-14 01:05:10.506195 | orchestrator | Monday 14 April 2025 01:04:07 +0000 (0:00:02.214) 0:01:06.958 ********** 2025-04-14 01:05:10.506211 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.506227 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.506243 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.506259 | orchestrator | 2025-04-14 01:05:10.506275 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-04-14 01:05:10.506290 | orchestrator | Monday 14 April 2025 01:04:09 +0000 (0:00:01.885) 0:01:08.843 ********** 2025-04-14 01:05:10.506316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}}) 2025-04-14 01:05:10.506439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506468 | orchestrator | 2025-04-14 01:05:10.506482 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-04-14 01:05:10.506497 | orchestrator | Monday 14 April 2025 01:04:20 +0000 (0:00:10.319) 0:01:19.162 ********** 2025-04-14 01:05:10.506518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.506542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506571 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.506586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.506602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506638 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.506660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-04-14 01:05:10.506676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:05:10.506705 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.506719 | orchestrator | 2025-04-14 01:05:10.506733 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-04-14 01:05:10.506777 | orchestrator | Monday 14 April 2025 01:04:21 +0000 (0:00:01.588) 0:01:20.751 ********** 2025-04-14 01:05:10.506793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-04-14 01:05:10.506892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.506969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.507019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:05:10.507036 | orchestrator | 2025-04-14 01:05:10.507051 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-04-14 01:05:10.507065 | orchestrator | Monday 14 April 2025 01:04:24 +0000 (0:00:03.294) 0:01:24.046 ********** 2025-04-14 01:05:10.507079 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:05:10.507093 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:05:10.507107 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:05:10.507121 | orchestrator | 2025-04-14 01:05:10.507136 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-04-14 01:05:10.507150 | orchestrator | Monday 14 April 2025 01:04:25 +0000 (0:00:00.804) 0:01:24.853 ********** 2025-04-14 01:05:10.507163 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507177 | orchestrator | 2025-04-14 01:05:10.507191 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-04-14 01:05:10.507205 | orchestrator | Monday 14 April 2025 01:04:28 
+0000 (0:00:02.771) 0:01:27.624 ********** 2025-04-14 01:05:10.507219 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507233 | orchestrator | 2025-04-14 01:05:10.507246 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-04-14 01:05:10.507260 | orchestrator | Monday 14 April 2025 01:04:30 +0000 (0:00:02.398) 0:01:30.023 ********** 2025-04-14 01:05:10.507274 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507294 | orchestrator | 2025-04-14 01:05:10.507308 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-14 01:05:10.507322 | orchestrator | Monday 14 April 2025 01:04:41 +0000 (0:00:11.065) 0:01:41.088 ********** 2025-04-14 01:05:10.507336 | orchestrator | 2025-04-14 01:05:10.507350 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-14 01:05:10.507364 | orchestrator | Monday 14 April 2025 01:04:42 +0000 (0:00:00.152) 0:01:41.241 ********** 2025-04-14 01:05:10.507377 | orchestrator | 2025-04-14 01:05:10.507391 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-04-14 01:05:10.507405 | orchestrator | Monday 14 April 2025 01:04:42 +0000 (0:00:00.514) 0:01:41.755 ********** 2025-04-14 01:05:10.507425 | orchestrator | 2025-04-14 01:05:10.507439 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-04-14 01:05:10.507460 | orchestrator | Monday 14 April 2025 01:04:42 +0000 (0:00:00.181) 0:01:41.937 ********** 2025-04-14 01:05:10.507474 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507488 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:05:10.507502 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:05:10.507516 | orchestrator | 2025-04-14 01:05:10.507530 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-04-14 01:05:10.507544 | orchestrator | Monday 14 April 2025 01:04:49 +0000 (0:00:07.037) 0:01:48.975 ********** 2025-04-14 01:05:10.507557 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507571 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:05:10.507585 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:05:10.507599 | orchestrator | 2025-04-14 01:05:10.507613 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-04-14 01:05:10.507627 | orchestrator | Monday 14 April 2025 01:04:56 +0000 (0:00:06.661) 0:01:55.637 ********** 2025-04-14 01:05:10.507893 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:05:10.507913 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:05:10.507928 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:05:10.507942 | orchestrator | 2025-04-14 01:05:10.507956 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:05:10.507971 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:05:10.507986 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 01:05:10.508000 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 01:05:10.508014 | orchestrator | 2025-04-14 01:05:10.508028 | orchestrator | 2025-04-14 01:05:10.508043 | orchestrator | TASKS RECAP 
******************************************************************** 2025-04-14 01:05:10.508063 | orchestrator | Monday 14 April 2025 01:05:08 +0000 (0:00:11.878) 0:02:07.515 ********** 2025-04-14 01:05:13.540269 | orchestrator | =============================================================================== 2025-04-14 01:05:13.540638 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.29s 2025-04-14 01:05:13.540674 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.88s 2025-04-14 01:05:13.540687 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.07s 2025-04-14 01:05:13.540700 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.32s 2025-04-14 01:05:13.540713 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.04s 2025-04-14 01:05:13.540726 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.66s 2025-04-14 01:05:13.540786 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.47s 2025-04-14 01:05:13.540804 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.30s 2025-04-14 01:05:13.540817 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.57s 2025-04-14 01:05:13.540831 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.49s 2025-04-14 01:05:13.540844 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.03s 2025-04-14 01:05:13.540858 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.72s 2025-04-14 01:05:13.540871 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.64s 2025-04-14 01:05:13.540885 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.47s 2025-04-14 01:05:13.540898 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.30s 2025-04-14 01:05:13.540912 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.30s 2025-04-14 01:05:13.540949 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.77s 2025-04-14 01:05:13.540962 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.50s 2025-04-14 01:05:13.540976 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.40s 2025-04-14 01:05:13.540989 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.22s 2025-04-14 01:05:13.541004 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:13.541018 | orchestrator | 2025-04-14 01:05:10 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:13.541032 | orchestrator | 2025-04-14 01:05:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:13.541079 | orchestrator | 2025-04-14 01:05:13 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:13.544189 | orchestrator | 2025-04-14 01:05:13 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:13.544240 | orchestrator | 2025-04-14 01:05:13 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:13.544275 | orchestrator | 2025-04-14 01:05:13 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:13.545928 | orchestrator | 2025-04-14 01:05:13 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:16.581670 | orchestrator | 2025-04-14 01:05:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:16.581853 | orchestrator | 2025-04-14 01:05:16 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:16.582149 | orchestrator | 2025-04-14 01:05:16 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:16.582842 | orchestrator | 2025-04-14 01:05:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:16.583486 | orchestrator | 2025-04-14 01:05:16 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:16.584329 | orchestrator | 2025-04-14 01:05:16 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:16.587047 | orchestrator | 2025-04-14 01:05:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:19.622537 | orchestrator | 2025-04-14 01:05:19 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:19.624895 | orchestrator | 2025-04-14 01:05:19 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:19.625898 | orchestrator | 2025-04-14 01:05:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:19.627134 | orchestrator | 2025-04-14 01:05:19 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:19.629837 | orchestrator | 2025-04-14 01:05:19 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:22.661968 | orchestrator | 2025-04-14 01:05:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:22.662229 | orchestrator | 2025-04-14 01:05:22 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:22.662430 | orchestrator | 2025-04-14 01:05:22 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:22.663475 | orchestrator | 2025-04-14 01:05:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:22.664444 | orchestrator | 2025-04-14 01:05:22 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:22.665344 | orchestrator | 2025-04-14 01:05:22 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:22.665582 | orchestrator | 2025-04-14 01:05:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:25.705127 | orchestrator | 2025-04-14 01:05:25 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:25.705653 | orchestrator | 2025-04-14 01:05:25 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:25.706492 | orchestrator | 2025-04-14 01:05:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:25.707527 | orchestrator | 2025-04-14 01:05:25 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:25.708438 | orchestrator | 2025-04-14 01:05:25 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:25.708538 | orchestrator | 2025-04-14 
01:05:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:28.762836 | orchestrator | 2025-04-14 01:05:28 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:28.763469 | orchestrator | 2025-04-14 01:05:28 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:28.764645 | orchestrator | 2025-04-14 01:05:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:28.765538 | orchestrator | 2025-04-14 01:05:28 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:28.766588 | orchestrator | 2025-04-14 01:05:28 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:28.767080 | orchestrator | 2025-04-14 01:05:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:31.803388 | orchestrator | 2025-04-14 01:05:31 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:31.803883 | orchestrator | 2025-04-14 01:05:31 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:31.804901 | orchestrator | 2025-04-14 01:05:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:31.805896 | orchestrator | 2025-04-14 01:05:31 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:31.806757 | orchestrator | 2025-04-14 01:05:31 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:31.806919 | orchestrator | 2025-04-14 01:05:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:34.850984 | orchestrator | 2025-04-14 01:05:34 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:34.851967 | orchestrator | 2025-04-14 01:05:34 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:34.852816 | orchestrator | 2025-04-14 01:05:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:34.856788 | orchestrator | 2025-04-14 01:05:34 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:34.857469 | orchestrator | 2025-04-14 01:05:34 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:34.857711 | orchestrator | 2025-04-14 01:05:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:37.904085 | orchestrator | 2025-04-14 01:05:37 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:37.904971 | orchestrator | 2025-04-14 01:05:37 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:37.905013 | orchestrator | 2025-04-14 01:05:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:37.908970 | orchestrator | 2025-04-14 01:05:37 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:37.910285 | orchestrator | 2025-04-14 01:05:37 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:40.965018 | orchestrator | 2025-04-14 01:05:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:40.965133 | orchestrator | 2025-04-14 01:05:40 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:40.966438 | orchestrator | 2025-04-14 01:05:40 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:40.973890 | orchestrator | 2025-04-14 
01:05:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:40.975399 | orchestrator | 2025-04-14 01:05:40 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:40.976476 | orchestrator | 2025-04-14 01:05:40 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:44.033146 | orchestrator | 2025-04-14 01:05:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:44.033403 | orchestrator | 2025-04-14 01:05:44 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:44.034070 | orchestrator | 2025-04-14 01:05:44 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:44.034113 | orchestrator | 2025-04-14 01:05:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:44.035299 | orchestrator | 2025-04-14 01:05:44 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:44.035507 | orchestrator | 2025-04-14 01:05:44 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:47.063371 | orchestrator | 2025-04-14 01:05:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:47.063675 | orchestrator | 2025-04-14 01:05:47 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:47.064121 | orchestrator | 2025-04-14 01:05:47 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:47.064162 | orchestrator | 2025-04-14 01:05:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:47.065227 | orchestrator | 2025-04-14 01:05:47 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:47.065677 | orchestrator | 2025-04-14 01:05:47 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:50.092400 | orchestrator | 2025-04-14 01:05:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:50.092674 | orchestrator | 2025-04-14 01:05:50 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:50.093580 | orchestrator | 2025-04-14 01:05:50 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:50.093638 | orchestrator | 2025-04-14 01:05:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:50.095194 | orchestrator | 2025-04-14 01:05:50 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:50.095867 | orchestrator | 2025-04-14 01:05:50 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:53.130359 | orchestrator | 2025-04-14 01:05:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:53.130452 | orchestrator | 2025-04-14 01:05:53 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:53.130901 | orchestrator | 2025-04-14 01:05:53 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:53.131894 | orchestrator | 2025-04-14 01:05:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:53.133002 | orchestrator | 2025-04-14 01:05:53 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:53.134650 | orchestrator | 2025-04-14 01:05:53 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:56.187833 | 
orchestrator | 2025-04-14 01:05:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:56.187974 | orchestrator | 2025-04-14 01:05:56 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:56.188817 | orchestrator | 2025-04-14 01:05:56 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:56.190088 | orchestrator | 2025-04-14 01:05:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:56.197540 | orchestrator | 2025-04-14 01:05:56 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:56.198096 | orchestrator | 2025-04-14 01:05:56 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:05:56.198284 | orchestrator | 2025-04-14 01:05:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:05:59.242126 | orchestrator | 2025-04-14 01:05:59 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:05:59.242775 | orchestrator | 2025-04-14 01:05:59 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:05:59.242814 | orchestrator | 2025-04-14 01:05:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:05:59.243432 | orchestrator | 2025-04-14 01:05:59 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:05:59.248589 | orchestrator | 2025-04-14 01:05:59 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:02.271889 | orchestrator | 2025-04-14 01:05:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:02.272153 | orchestrator | 2025-04-14 01:06:02 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:02.272869 | orchestrator | 2025-04-14 01:06:02 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:02.272899 | orchestrator | 2025-04-14 01:06:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:02.272922 | orchestrator | 2025-04-14 01:06:02 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:02.273271 | orchestrator | 2025-04-14 01:06:02 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:05.322397 | orchestrator | 2025-04-14 01:06:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:05.322536 | orchestrator | 2025-04-14 01:06:05 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:05.324178 | orchestrator | 2025-04-14 01:06:05 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:05.326213 | orchestrator | 2025-04-14 01:06:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:05.326245 | orchestrator | 2025-04-14 01:06:05 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:05.326265 | orchestrator | 2025-04-14 01:06:05 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:08.361840 | orchestrator | 2025-04-14 01:06:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:08.361974 | orchestrator | 2025-04-14 01:06:08 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:08.364309 | orchestrator | 2025-04-14 01:06:08 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:08.365343 | 
orchestrator | 2025-04-14 01:06:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:08.365376 | orchestrator | 2025-04-14 01:06:08 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:08.366940 | orchestrator | 2025-04-14 01:06:08 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:11.429752 | orchestrator | 2025-04-14 01:06:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:11.429872 | orchestrator | 2025-04-14 01:06:11 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:11.430641 | orchestrator | 2025-04-14 01:06:11 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:11.432038 | orchestrator | 2025-04-14 01:06:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:11.433543 | orchestrator | 2025-04-14 01:06:11 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:11.434606 | orchestrator | 2025-04-14 01:06:11 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:14.483137 | orchestrator | 2025-04-14 01:06:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:14.483291 | orchestrator | 2025-04-14 01:06:14 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:14.484405 | orchestrator | 2025-04-14 01:06:14 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:14.485365 | orchestrator | 2025-04-14 01:06:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:14.486604 | orchestrator | 2025-04-14 01:06:14 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:14.488055 | orchestrator | 2025-04-14 01:06:14 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:17.542349 | orchestrator | 2025-04-14 01:06:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:17.542473 | orchestrator | 2025-04-14 01:06:17 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state STARTED 2025-04-14 01:06:17.544995 | orchestrator | 2025-04-14 01:06:17 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:17.547087 | orchestrator | 2025-04-14 01:06:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:17.548944 | orchestrator | 2025-04-14 01:06:17 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:17.551052 | orchestrator | 2025-04-14 01:06:17 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED 2025-04-14 01:06:20.608506 | orchestrator | 2025-04-14 01:06:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:20.608652 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task fc013d88-90a0-4792-9dd5-59fbb86a0ca2 is in state SUCCESS 2025-04-14 01:06:20.610996 | orchestrator | 2025-04-14 01:06:20.611060 | orchestrator | 2025-04-14 01:06:20.611078 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:06:20.611095 | orchestrator | 2025-04-14 01:06:20.611111 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:06:20.611127 | orchestrator | Monday 14 April 2025 01:03:01 +0000 (0:00:00.540) 0:00:00.540 ********** 2025-04-14 01:06:20.611166 | orchestrator | ok: 
[testbed-node-0]
2025-04-14 01:06:20.611184 | orchestrator | ok: [testbed-node-1]
2025-04-14 01:06:20.611199 | orchestrator | ok: [testbed-node-2]
2025-04-14 01:06:20.611214 | orchestrator |
2025-04-14 01:06:20.611230 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-14 01:06:20.611245 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:00.852) 0:00:01.393 **********
2025-04-14 01:06:20.611260 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-04-14 01:06:20.611747 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-04-14 01:06:20.611777 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-04-14 01:06:20.611793 | orchestrator |
2025-04-14 01:06:20.611808 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-04-14 01:06:20.611824 | orchestrator |
2025-04-14 01:06:20.611839 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-04-14 01:06:20.611854 | orchestrator | Monday 14 April 2025 01:03:03 +0000 (0:00:00.523) 0:00:01.916 **********
2025-04-14 01:06:20.611870 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-14 01:06:20.611887 | orchestrator |
2025-04-14 01:06:20.611902 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-04-14 01:06:20.611977 | orchestrator | Monday 14 April 2025 01:03:04 +0000 (0:00:01.041) 0:00:02.958 **********
2025-04-14 01:06:20.611994 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-04-14 01:06:20.612134 | orchestrator |
2025-04-14 01:06:20.612152 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-04-14 01:06:20.612166 | orchestrator | Monday 14 April 2025 01:03:07 +0000 (0:00:03.782) 0:00:06.740 **********
2025-04-14 01:06:20.612180 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-04-14 01:06:20.612194 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-04-14 01:06:20.612209 | orchestrator |
2025-04-14 01:06:20.612223 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-04-14 01:06:20.612237 | orchestrator | Monday 14 April 2025 01:03:14 +0000 (0:00:06.403) 0:00:13.143 **********
2025-04-14 01:06:20.612251 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-04-14 01:06:20.612265 | orchestrator |
2025-04-14 01:06:20.612279 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-04-14 01:06:20.612293 | orchestrator | Monday 14 April 2025 01:03:17 +0000 (0:00:03.409) 0:00:16.553 **********
2025-04-14 01:06:20.613549 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-04-14 01:06:20.613583 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-04-14 01:06:20.613598 | orchestrator |
2025-04-14 01:06:20.613612 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-04-14 01:06:20.613627 | orchestrator | Monday 14 April 2025 01:03:21 +0000 (0:00:04.103) 0:00:20.656 **********
2025-04-14 01:06:20.613641 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-04-14 01:06:20.613655 | orchestrator |
2025-04-14
01:06:20.613722 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-04-14 01:06:20.613738 | orchestrator | Monday 14 April 2025 01:03:24 +0000 (0:00:03.096) 0:00:23.752 ********** 2025-04-14 01:06:20.613752 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-04-14 01:06:20.613766 | orchestrator | 2025-04-14 01:06:20.613780 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-04-14 01:06:20.613794 | orchestrator | Monday 14 April 2025 01:03:28 +0000 (0:00:04.027) 0:00:27.780 ********** 2025-04-14 01:06:20.613811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.613899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.613919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.613935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.613951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.613966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.613990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614264 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.614316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.614357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.614422 | orchestrator | 2025-04-14 01:06:20.614438 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-04-14 01:06:20.614455 | orchestrator | Monday 14 April 2025 01:03:31 +0000 (0:00:03.032) 0:00:30.813 ********** 2025-04-14 01:06:20.614471 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.614564 | orchestrator | 2025-04-14 01:06:20.614592 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-04-14 01:06:20.614607 | orchestrator | Monday 14 April 2025 01:03:32 +0000 (0:00:00.131) 0:00:30.945 ********** 2025-04-14 01:06:20.614621 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.614636 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:20.614649 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:20.614664 | orchestrator | 2025-04-14 01:06:20.614743 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-14 01:06:20.614757 | orchestrator | Monday 14 April 2025 01:03:32 +0000 (0:00:00.445) 0:00:31.391 ********** 2025-04-14 01:06:20.614771 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:06:20.614785 | orchestrator | 2025-04-14 01:06:20.614799 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-04-14 01:06:20.614813 | orchestrator | Monday 14 April 2025 01:03:33 +0000 (0:00:00.651) 0:00:32.042 ********** 2025-04-14 01:06:20.614828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.614853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.614869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.614923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.614993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.615217 | orchestrator | 2025-04-14 01:06:20.615231 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-04-14 01:06:20.615245 | orchestrator | Monday 14 April 2025 01:03:39 +0000 (0:00:06.171) 0:00:38.213 ********** 2025-04-14 01:06:20.615260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.615296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615392 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.615406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.615443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615523 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:20.615536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.615569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615649 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:20.615662 | orchestrator | 2025-04-14 01:06:20.615694 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-04-14 01:06:20.615707 | orchestrator | Monday 14 April 2025 01:03:41 +0000 (0:00:02.391) 0:00:40.605 ********** 2025-04-14 01:06:20.615727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.615754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615842 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.615856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.615882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.615971 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:20.615984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.615997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.616010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616099 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:20.616112 | orchestrator | 2025-04-14 01:06:20.616125 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-04-14 01:06:20.616138 | orchestrator | Monday 14 April 2025 01:03:43 +0000 (0:00:02.161) 0:00:42.767 ********** 2025-04-14 01:06:20.616150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.616534 | orchestrator | 2025-04-14 01:06:20.616546 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-04-14 01:06:20.616559 | orchestrator | Monday 14 April 2025 01:03:52 +0000 (0:00:08.313) 0:00:51.081 ********** 2025-04-14 01:06:20.616657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616691 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.616784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
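[Editor's sketch, not part of the console output] The loop items dumped above all repeat entries of the same service map that the designate role iterates over. For readability, one of those entries can be re-expressed as YAML in the shape kolla-ansible uses for its role defaults; every value below is copied from the log output, but the variable name `designate_services` and the file layout are assumptions for illustration, not the literal role file:

# Sketch of one entry from the service map seen in the loop items above.
# Values are taken verbatim from the log; "designate_services" is the
# conventional kolla-ansible variable name and is assumed here.
designate_services:
  designate-worker:
    container_name: designate_worker
    group: designate-worker
    enabled: true
    image: "registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206"
    volumes:
      - "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_port designate-worker 5672"]
      timeout: "30"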
2025-04-14 01:06:20.616925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.616973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617102 | orchestrator | 2025-04-14 01:06:20.617114 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-04-14 01:06:20.617127 | orchestrator | Monday 14 April 2025 01:04:15 +0000 (0:00:23.414) 0:01:14.496 ********** 2025-04-14 01:06:20.617139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-14 01:06:20.617153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-14 01:06:20.617165 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-04-14 01:06:20.617177 | orchestrator | 2025-04-14 01:06:20.617190 | orchestrator | 
TASK [designate : Copying over named.conf] ************************************* 2025-04-14 01:06:20.617208 | orchestrator | Monday 14 April 2025 01:04:22 +0000 (0:00:07.213) 0:01:21.710 ********** 2025-04-14 01:06:20.617221 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-14 01:06:20.617238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-14 01:06:20.617251 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-04-14 01:06:20.617263 | orchestrator | 2025-04-14 01:06:20.617276 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-04-14 01:06:20.617289 | orchestrator | Monday 14 April 2025 01:04:26 +0000 (0:00:04.071) 0:01:25.781 ********** 2025-04-14 01:06:20.617304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617652 | orchestrator | 2025-04-14 01:06:20.617682 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-04-14 01:06:20.617696 | orchestrator | Monday 14 April 2025 01:04:30 +0000 (0:00:04.017) 0:01:29.799 ********** 2025-04-14 01:06:20.617710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.617756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': 
'30'}}})  2025-04-14 01:06:20.617904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.617981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.617994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618047 | orchestrator | 2025-04-14 01:06:20.618063 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-14 01:06:20.618076 | orchestrator | Monday 14 April 2025 01:04:33 +0000 (0:00:02.909) 0:01:32.708 ********** 2025-04-14 01:06:20.618088 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.618101 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:20.618113 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:20.618126 | orchestrator | 2025-04-14 01:06:20.618138 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-04-14 01:06:20.618151 | orchestrator | Monday 14 April 2025 01:04:34 +0000 (0:00:00.320) 0:01:33.028 ********** 2025-04-14 01:06:20.618170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.618184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.618203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2025-04-14 01:06:20.618273 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.618286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.618305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.618318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618395 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:20.618408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-04-14 01:06:20.618422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-04-14 01:06:20.618435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618511 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:20.618524 | orchestrator | 2025-04-14 01:06:20.618536 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-04-14 01:06:20.618549 | orchestrator | Monday 14 April 2025 01:04:35 +0000 (0:00:01.398) 0:01:34.427 ********** 2025-04-14 01:06:20.618562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.618575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.618588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-04-14 01:06:20.618612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 
53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 
01:06:20.618862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-04-14 01:06:20.618943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-04-14 01:06:20.618956 | orchestrator | 2025-04-14 01:06:20.618968 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-04-14 01:06:20.618981 | orchestrator | Monday 14 April 2025 01:04:42 +0000 (0:00:06.553) 0:01:40.980 ********** 2025-04-14 01:06:20.618994 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:20.619006 | orchestrator | skipping: 
[testbed-node-1]
2025-04-14 01:06:20.619019 | orchestrator | skipping: [testbed-node-2]
2025-04-14 01:06:20.619031 | orchestrator |
2025-04-14 01:06:20.619044 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-04-14 01:06:20.619056 | orchestrator | Monday 14 April 2025 01:04:43 +0000 (0:00:01.096) 0:01:42.077 **********
2025-04-14 01:06:20.619069 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-04-14 01:06:20.619091 | orchestrator |
2025-04-14 01:06:20.619104 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-04-14 01:06:20.619116 | orchestrator | Monday 14 April 2025 01:04:45 +0000 (0:00:02.397) 0:01:44.474 **********
2025-04-14 01:06:20.619129 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-04-14 01:06:20.619142 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-04-14 01:06:20.619154 | orchestrator |
2025-04-14 01:06:20.619167 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-04-14 01:06:20.619179 | orchestrator | Monday 14 April 2025 01:04:47 +0000 (0:00:02.383) 0:01:46.858 **********
2025-04-14 01:06:20.619191 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619204 | orchestrator |
2025-04-14 01:06:20.619217 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-04-14 01:06:20.619229 | orchestrator | Monday 14 April 2025 01:05:02 +0000 (0:00:14.911) 0:02:01.769 **********
2025-04-14 01:06:20.619241 | orchestrator |
2025-04-14 01:06:20.619251 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-04-14 01:06:20.619261 | orchestrator | Monday 14 April 2025 01:05:03 +0000 (0:00:00.177) 0:02:01.946 **********
2025-04-14 01:06:20.619272 | orchestrator |
2025-04-14 01:06:20.619286 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-04-14 01:06:20.619300 | orchestrator | Monday 14 April 2025 01:05:03 +0000 (0:00:00.110) 0:02:02.057 **********
2025-04-14 01:06:20.619311 | orchestrator |
2025-04-14 01:06:20.619321 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-04-14 01:06:20.619331 | orchestrator | Monday 14 April 2025 01:05:03 +0000 (0:00:00.064) 0:02:02.122 **********
2025-04-14 01:06:20.619341 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619351 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619361 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619372 | orchestrator |
2025-04-14 01:06:20.619382 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-04-14 01:06:20.619392 | orchestrator | Monday 14 April 2025 01:05:17 +0000 (0:00:14.297) 0:02:16.420 **********
2025-04-14 01:06:20.619402 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619412 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619422 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619432 | orchestrator |
2025-04-14 01:06:20.619443 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-04-14 01:06:20.619453 | orchestrator | Monday 14 April 2025 01:05:29 +0000 (0:00:11.872) 0:02:28.293 **********
2025-04-14 01:06:20.619463 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619473 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619483 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619493 | orchestrator |
2025-04-14 01:06:20.619503 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-04-14 01:06:20.619514 | orchestrator | Monday 14 April 2025 01:05:42 +0000 (0:00:13.132) 0:02:41.425 **********
2025-04-14 01:06:20.619524 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619534 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619544 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619554 | orchestrator |
2025-04-14 01:06:20.619564 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-04-14 01:06:20.619574 | orchestrator | Monday 14 April 2025 01:05:56 +0000 (0:00:13.835) 0:02:55.260 **********
2025-04-14 01:06:20.619584 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619595 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619605 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619615 | orchestrator |
2025-04-14 01:06:20.619625 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-04-14 01:06:20.619723 | orchestrator | Monday 14 April 2025 01:06:08 +0000 (0:00:11.826) 0:03:07.086 **********
2025-04-14 01:06:20.619740 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619758 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:06:20.619769 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:06:20.619780 | orchestrator |
2025-04-14 01:06:20.619791 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-04-14 01:06:20.619802 | orchestrator | Monday 14 April 2025 01:06:14 +0000 (0:00:06.057) 0:03:13.143 **********
2025-04-14 01:06:20.619813 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:06:20.619825 | orchestrator |
2025-04-14 01:06:20.619836 | orchestrator | PLAY RECAP *********************************************************************
2025-04-14 01:06:20.619848 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-14 01:06:20.619859 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-14 01:06:20.619871 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-04-14 01:06:20.619882 | orchestrator |
2025-04-14 01:06:20.619893 | orchestrator |
2025-04-14 01:06:20.619904 | orchestrator | TASKS RECAP ********************************************************************
2025-04-14 01:06:20.619915 | orchestrator | Monday 14 April 2025 01:06:19 +0000 (0:00:04.883) 0:03:18.027 **********
2025-04-14 01:06:20.619926 | orchestrator | ===============================================================================
2025-04-14 01:06:20.619937 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.41s
2025-04-14 01:06:20.619948 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.91s
2025-04-14 01:06:20.619959 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.30s
2025-04-14 01:06:20.619970 | orchestrator | designate : Restart designate-producer container ----------------------- 13.84s
2025-04-14 01:06:20.619981 | orchestrator | designate : Restart designate-central container ------------------------ 13.13s
2025-04-14 01:06:20.619991 | orchestrator | designate : Restart designate-api container ---------------------------- 11.87s
2025-04-14 01:06:20.620002 | orchestrator | designate : Restart designate-mdns container --------------------------- 11.83s
2025-04-14 01:06:20.620013 | orchestrator | designate : Copying over config.json files for services ----------------- 8.31s
2025-04-14 01:06:20.620024 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.21s
2025-04-14 01:06:20.620035 | orchestrator | designate : Check designate containers ---------------------------------- 6.56s
2025-04-14 01:06:20.620046 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.40s
2025-04-14 01:06:20.620057 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.17s
2025-04-14 01:06:20.620084 | orchestrator | designate : Restart designate-worker container -------------------------- 6.06s
2025-04-14 01:06:20.620095 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 4.88s
2025-04-14 01:06:20.620106 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.10s
2025-04-14 01:06:20.620117 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.07s
2025-04-14 01:06:20.620128 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.03s
2025-04-14 01:06:20.620144 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.02s
2025-04-14 01:06:23.665823 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.78s
2025-04-14 01:06:23.665937 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.41s
2025-04-14 01:06:23.665951 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED
2025-04-14 01:06:23.665961 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:06:23.665970 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED
2025-04-14 01:06:23.666003 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED
2025-04-14 01:06:23.666012 | orchestrator | 2025-04-14 01:06:20 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED
2025-04-14 01:06:23.666088 | orchestrator | 2025-04-14 01:06:20 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:06:23.666111 | orchestrator | 2025-04-14 01:06:23 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED
2025-04-14 01:06:23.666803 | orchestrator | 2025-04-14 01:06:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:06:23.668694 | orchestrator | 2025-04-14 01:06:23 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED
2025-04-14 01:06:23.670119 | orchestrator | 2025-04-14 01:06:23 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED
2025-04-14 01:06:23.671520 | orchestrator | 2025-04-14 01:06:23 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED
2025-04-14 01:06:26.736547 | orchestrator | 2025-04-14 01:06:23 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:06:26.736728 | orchestrator | 2025-04-14 01:06:26 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED
2025-04-14 01:06:26.738598 | orchestrator | 2025-04-14 01:06:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:06:26.740948 | orchestrator | 2025-04-14 01:06:26 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED
2025-04-14 01:06:26.742792 | orchestrator | 2025-04-14 01:06:26 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED
2025-04-14 01:06:26.744602 | orchestrator | 2025-04-14 01:06:26 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state STARTED
2025-04-14 01:06:29.792920 | orchestrator | 2025-04-14 01:06:26 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:06:29.793058 | orchestrator | 2025-04-14 01:06:29 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED
2025-04-14 01:06:29.799105 | orchestrator | 2025-04-14 01:06:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:06:29.800476 | orchestrator | 2025-04-14 01:06:29 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED
2025-04-14 01:06:29.802070 | orchestrator | 2025-04-14 01:06:29 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED
2025-04-14 01:06:29.805632 | orchestrator |
2025-04-14 01:06:29.805712 | orchestrator |
2025-04-14 01:06:29.805729 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-14 01:06:29.805744 | orchestrator |
2025-04-14 01:06:29.805759 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-04-14 01:06:29.805774 | orchestrator | Monday 14 April 2025 01:05:14 +0000 (0:00:00.328) 0:00:00.328 **********
2025-04-14 01:06:29.805788 | orchestrator | ok: [testbed-node-0]
2025-04-14 01:06:29.805820 | orchestrator | ok: [testbed-node-1]
2025-04-14 01:06:29.805837 | orchestrator | ok: [testbed-node-2]
2025-04-14 01:06:29.805851 | orchestrator |
2025-04-14 01:06:29.805866 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-04-14 01:06:29.805881 | orchestrator | Monday 14 April 2025 01:05:15 +0000 (0:00:00.425) 0:00:00.753 **********
2025-04-14 01:06:29.805895 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-04-14 01:06:29.805910 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-04-14 01:06:29.805924 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-04-14 01:06:29.805939 | orchestrator |
2025-04-14 01:06:29.805953 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-04-14 01:06:29.805988 | orchestrator |
2025-04-14 01:06:29.806003 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-04-14 01:06:29.806060 | orchestrator | Monday 14 April 2025 01:05:16 +0000 (0:00:00.749) 0:00:01.502 **********
2025-04-14 01:06:29.806078 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-04-14 01:06:29.806094 | orchestrator |
2025-04-14 01:06:29.806108 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-04-14 01:06:29.806122 | orchestrator | Monday 14 April 2025 01:05:17 +0000 (0:00:01.345) 0:00:02.848 **********
2025-04-14 01:06:29.806136 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-04-14 01:06:29.806150 |
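The service-ks-register tasks around this point register placement in Keystone: a service entry, internal and public endpoints, a user in the service project, and an admin role grant. A rough equivalent with the plain OpenStack CLI (a sketch only; kolla-ansible actually drives this through Ansible modules, and PLACEMENT_PASSWORD is a placeholder, not a value from this job):

    # Service and endpoints (URLs as reported by the tasks below)
    openstack service create --name placement placement
    openstack endpoint create placement internal https://api-int.testbed.osism.xyz:8780
    openstack endpoint create placement public https://api.testbed.osism.xyz:8780
    # Service project, service user, and admin role grant
    openstack project show service || openstack project create --domain default service
    openstack user create --project service --password "$PLACEMENT_PASSWORD" placement
    openstack role add --project service --user placement admin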
orchestrator | 2025-04-14 01:06:29.806165 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-04-14 01:06:29.806179 | orchestrator | Monday 14 April 2025 01:05:20 +0000 (0:00:03.455) 0:00:06.304 ********** 2025-04-14 01:06:29.806193 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-04-14 01:06:29.806207 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-04-14 01:06:29.806223 | orchestrator | 2025-04-14 01:06:29.806239 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-04-14 01:06:29.806256 | orchestrator | Monday 14 April 2025 01:05:27 +0000 (0:00:06.480) 0:00:12.784 ********** 2025-04-14 01:06:29.806272 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:06:29.806296 | orchestrator | 2025-04-14 01:06:29.806322 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-04-14 01:06:29.806347 | orchestrator | Monday 14 April 2025 01:05:31 +0000 (0:00:03.857) 0:00:16.641 ********** 2025-04-14 01:06:29.806372 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:06:29.806397 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-04-14 01:06:29.806422 | orchestrator | 2025-04-14 01:06:29.806447 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-04-14 01:06:29.806472 | orchestrator | Monday 14 April 2025 01:05:35 +0000 (0:00:03.896) 0:00:20.538 ********** 2025-04-14 01:06:29.806499 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:06:29.806521 | orchestrator | 2025-04-14 01:06:29.806537 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-04-14 01:06:29.806554 | orchestrator | Monday 14 April 2025 01:05:38 +0000 (0:00:03.195) 0:00:23.733 ********** 2025-04-14 01:06:29.806570 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-04-14 01:06:29.806584 | orchestrator | 2025-04-14 01:06:29.806598 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-14 01:06:29.806612 | orchestrator | Monday 14 April 2025 01:05:42 +0000 (0:00:04.338) 0:00:28.072 ********** 2025-04-14 01:06:29.806626 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.806641 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:29.806680 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:29.806701 | orchestrator | 2025-04-14 01:06:29.806715 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-04-14 01:06:29.806737 | orchestrator | Monday 14 April 2025 01:05:43 +0000 (0:00:00.702) 0:00:28.774 ********** 2025-04-14 01:06:29.806753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.806801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.806818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.806833 | orchestrator | 2025-04-14 01:06:29.806847 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-04-14 01:06:29.806862 | orchestrator | Monday 14 April 2025 01:05:45 +0000 (0:00:02.030) 0:00:30.804 ********** 2025-04-14 01:06:29.806876 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.806890 | orchestrator | 2025-04-14 01:06:29.806904 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-04-14 01:06:29.806918 | orchestrator | Monday 14 April 2025 01:05:45 +0000 (0:00:00.264) 0:00:31.069 ********** 2025-04-14 01:06:29.806931 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.806946 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:29.806959 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:29.806973 | orchestrator | 2025-04-14 01:06:29.806991 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-04-14 01:06:29.807015 | orchestrator | Monday 14 April 2025 01:05:46 +0000 (0:00:00.753) 0:00:31.823 ********** 2025-04-14 01:06:29.807037 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 
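Each container definition in the loops above carries a kolla healthcheck: healthcheck_curl against the placement-api port 8780, and healthcheck_port / healthcheck_listen for the designate RPC and bind9 services. Once a container is running, Docker exposes the result of that healthcheck; a small sketch for checking it by hand on a node (container names and addresses taken from the log, the curl call only approximates what the kolla healthcheck script does):

    # Health state Docker derived from the kolla healthcheck
    docker inspect --format '{{.State.Health.Status}}' placement_api
    # Rough manual equivalent of 'healthcheck_curl http://192.168.16.10:8780' on testbed-node-0
    curl -fsS http://192.168.16.10:8780/
    # Overview of the designate and placement containers and their status
    docker ps --filter name=designate_ --filter name=placement_ --format '{{.Names}}\t{{.Status}}'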
01:06:29.807060 | orchestrator | 2025-04-14 01:06:29.807081 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-04-14 01:06:29.807103 | orchestrator | Monday 14 April 2025 01:05:47 +0000 (0:00:01.616) 0:00:33.439 ********** 2025-04-14 01:06:29.807171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807275 | orchestrator | 2025-04-14 01:06:29.807299 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-04-14 01:06:29.807322 | orchestrator | Monday 14 April 2025 01:05:50 +0000 (0:00:02.508) 0:00:35.947 ********** 2025-04-14 01:06:29.807347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807371 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.807407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807431 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:29.807455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807470 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:29.807484 | orchestrator | 2025-04-14 01:06:29.807498 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-04-14 01:06:29.807512 | orchestrator | Monday 14 April 2025 01:05:51 +0000 (0:00:01.015) 0:00:36.963 ********** 2025-04-14 01:06:29.807527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807542 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.807566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807581 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:29.807596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.807617 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:29.807636 | orchestrator | 2025-04-14 01:06:29.807687 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-04-14 01:06:29.807712 | orchestrator | Monday 14 April 2025 01:05:52 +0000 (0:00:01.317) 0:00:38.281 ********** 2025-04-14 01:06:29.807748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807823 | orchestrator | 2025-04-14 01:06:29.807847 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-04-14 01:06:29.807863 | orchestrator | Monday 14 April 2025 01:05:55 +0000 (0:00:02.220) 0:00:40.501 ********** 2025-04-14 01:06:29.807893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.807925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.808050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.808079 | orchestrator | 2025-04-14 01:06:29.808102 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-04-14 01:06:29.808125 | orchestrator | Monday 14 April 2025 01:05:58 +0000 (0:00:03.887) 0:00:44.388 ********** 2025-04-14 01:06:29.808147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-14 01:06:29.808171 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-14 01:06:29.808196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-04-14 01:06:29.808245 | orchestrator | 2025-04-14 01:06:29.808270 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-04-14 01:06:29.808293 | orchestrator | Monday 14 April 2025 01:06:01 +0000 (0:00:02.957) 0:00:47.346 ********** 2025-04-14 01:06:29.808318 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:06:29.808341 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:06:29.808364 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:06:29.808388 | orchestrator | 2025-04-14 01:06:29.808411 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-04-14 01:06:29.808425 | orchestrator | Monday 14 April 2025 01:06:04 +0000 (0:00:02.300) 0:00:49.646 ********** 2025-04-14 01:06:29.808453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.808469 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:06:29.808506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.808522 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:06:29.808548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-04-14 01:06:29.808563 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:06:29.808577 | orchestrator | 2025-04-14 01:06:29.808591 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-04-14 01:06:29.808605 | orchestrator | Monday 14 April 2025 01:06:05 +0000 (0:00:01.211) 0:00:50.858 ********** 2025-04-14 01:06:29.808619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.808642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.808740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-04-14 01:06:29.808760 | orchestrator | 2025-04-14 01:06:29.808774 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-04-14 01:06:29.808789 | orchestrator | Monday 14 April 2025 01:06:06 +0000 (0:00:01.586) 0:00:52.445 ********** 2025-04-14 01:06:29.808802 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:06:29.808816 | orchestrator | 2025-04-14 01:06:29.808830 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-04-14 01:06:29.808844 | orchestrator | Monday 14 April 2025 01:06:09 +0000 (0:00:02.492) 0:00:54.937 ********** 2025-04-14 01:06:29.808858 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:06:29.808872 | orchestrator | 2025-04-14 01:06:29.808886 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-04-14 01:06:29.808899 | orchestrator | Monday 14 April 2025 01:06:11 +0000 (0:00:02.320) 0:00:57.257 
********** 2025-04-14 01:06:29.808921 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:06:32.853120 | orchestrator | 2025-04-14 01:06:32.853235 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-14 01:06:32.853272 | orchestrator | Monday 14 April 2025 01:06:23 +0000 (0:00:11.825) 0:01:09.083 ********** 2025-04-14 01:06:32.853286 | orchestrator | 2025-04-14 01:06:32.853299 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-14 01:06:32.853312 | orchestrator | Monday 14 April 2025 01:06:23 +0000 (0:00:00.059) 0:01:09.143 ********** 2025-04-14 01:06:32.853325 | orchestrator | 2025-04-14 01:06:32.853337 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-04-14 01:06:32.853350 | orchestrator | Monday 14 April 2025 01:06:23 +0000 (0:00:00.190) 0:01:09.333 ********** 2025-04-14 01:06:32.853362 | orchestrator | 2025-04-14 01:06:32.853375 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-04-14 01:06:32.853387 | orchestrator | Monday 14 April 2025 01:06:23 +0000 (0:00:00.060) 0:01:09.394 ********** 2025-04-14 01:06:32.853400 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:06:32.853436 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:06:32.853449 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:06:32.853461 | orchestrator | 2025-04-14 01:06:32.853474 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:06:32.853487 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-04-14 01:06:32.853501 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 01:06:32.853514 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-04-14 01:06:32.853526 | orchestrator | 2025-04-14 01:06:32.853539 | orchestrator | 2025-04-14 01:06:32.853552 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:06:32.853564 | orchestrator | Monday 14 April 2025 01:06:29 +0000 (0:00:05.386) 0:01:14.780 ********** 2025-04-14 01:06:32.853576 | orchestrator | =============================================================================== 2025-04-14 01:06:32.853589 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.83s 2025-04-14 01:06:32.853601 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.48s 2025-04-14 01:06:32.853613 | orchestrator | placement : Restart placement-api container ----------------------------- 5.39s 2025-04-14 01:06:32.853626 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.34s 2025-04-14 01:06:32.853638 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s 2025-04-14 01:06:32.853651 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.89s 2025-04-14 01:06:32.853702 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.86s 2025-04-14 01:06:32.853715 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.46s 2025-04-14 01:06:32.853729 | orchestrator | service-ks-register : placement | Creating roles 
------------------------ 3.20s 2025-04-14 01:06:32.853743 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.96s 2025-04-14 01:06:32.853758 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.51s 2025-04-14 01:06:32.853771 | orchestrator | placement : Creating placement databases -------------------------------- 2.49s 2025-04-14 01:06:32.853785 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.32s 2025-04-14 01:06:32.853799 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.30s 2025-04-14 01:06:32.853814 | orchestrator | placement : Copying over config.json files for services ----------------- 2.22s 2025-04-14 01:06:32.853828 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.03s 2025-04-14 01:06:32.853842 | orchestrator | placement : include_tasks ----------------------------------------------- 1.62s 2025-04-14 01:06:32.853856 | orchestrator | placement : Check placement containers ---------------------------------- 1.59s 2025-04-14 01:06:32.853869 | orchestrator | placement : include_tasks ----------------------------------------------- 1.34s 2025-04-14 01:06:32.853883 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.32s 2025-04-14 01:06:32.853897 | orchestrator | 2025-04-14 01:06:29 | INFO  | Task 6067b546-19ce-4a96-9f36-1ceb175418c1 is in state SUCCESS 2025-04-14 01:06:32.853912 | orchestrator | 2025-04-14 01:06:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:32.853943 | orchestrator | 2025-04-14 01:06:32 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:32.854793 | orchestrator | 2025-04-14 01:06:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:32.856269 | orchestrator | 2025-04-14 01:06:32 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:32.857451 | orchestrator | 2025-04-14 01:06:32 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:32.858783 | orchestrator | 2025-04-14 01:06:32 | INFO  | Task 10e138c1-91a3-4240-831d-286e7a39a458 is in state STARTED 2025-04-14 01:06:35.921635 | orchestrator | 2025-04-14 01:06:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:35.921821 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:35.923682 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:35.925838 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:35.927585 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:35.929273 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:35.930516 | orchestrator | 2025-04-14 01:06:35 | INFO  | Task 10e138c1-91a3-4240-831d-286e7a39a458 is in state SUCCESS 2025-04-14 01:06:38.984629 | orchestrator | 2025-04-14 01:06:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:38.984807 | orchestrator | 2025-04-14 01:06:38 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:38.985794 | orchestrator | 2025-04-14 
01:06:38 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:38.988459 | orchestrator | 2025-04-14 01:06:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:38.991734 | orchestrator | 2025-04-14 01:06:38 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:38.993063 | orchestrator | 2025-04-14 01:06:38 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:38.994011 | orchestrator | 2025-04-14 01:06:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:42.055956 | orchestrator | 2025-04-14 01:06:42 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:42.057896 | orchestrator | 2025-04-14 01:06:42 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:42.058592 | orchestrator | 2025-04-14 01:06:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:42.065383 | orchestrator | 2025-04-14 01:06:42 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:45.114427 | orchestrator | 2025-04-14 01:06:42 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:45.114592 | orchestrator | 2025-04-14 01:06:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:45.114620 | orchestrator | 2025-04-14 01:06:45 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:45.115459 | orchestrator | 2025-04-14 01:06:45 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:45.115482 | orchestrator | 2025-04-14 01:06:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:45.116022 | orchestrator | 2025-04-14 01:06:45 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:45.117143 | orchestrator | 2025-04-14 01:06:45 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:48.166568 | orchestrator | 2025-04-14 01:06:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:48.166788 | orchestrator | 2025-04-14 01:06:48 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:48.166936 | orchestrator | 2025-04-14 01:06:48 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:48.166965 | orchestrator | 2025-04-14 01:06:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:48.168012 | orchestrator | 2025-04-14 01:06:48 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:48.168452 | orchestrator | 2025-04-14 01:06:48 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:51.217792 | orchestrator | 2025-04-14 01:06:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:51.217941 | orchestrator | 2025-04-14 01:06:51 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:51.218891 | orchestrator | 2025-04-14 01:06:51 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:51.220161 | orchestrator | 2025-04-14 01:06:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:51.221403 | orchestrator | 2025-04-14 01:06:51 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:51.222964 | 
orchestrator | 2025-04-14 01:06:51 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:54.276168 | orchestrator | 2025-04-14 01:06:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:54.276310 | orchestrator | 2025-04-14 01:06:54 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:54.276566 | orchestrator | 2025-04-14 01:06:54 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:54.277737 | orchestrator | 2025-04-14 01:06:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:54.278699 | orchestrator | 2025-04-14 01:06:54 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:54.279569 | orchestrator | 2025-04-14 01:06:54 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:06:57.331715 | orchestrator | 2025-04-14 01:06:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:06:57.331855 | orchestrator | 2025-04-14 01:06:57 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:06:57.332139 | orchestrator | 2025-04-14 01:06:57 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:06:57.333084 | orchestrator | 2025-04-14 01:06:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:06:57.336697 | orchestrator | 2025-04-14 01:06:57 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:06:57.336959 | orchestrator | 2025-04-14 01:06:57 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:00.373172 | orchestrator | 2025-04-14 01:06:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:00.373322 | orchestrator | 2025-04-14 01:07:00 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:00.374962 | orchestrator | 2025-04-14 01:07:00 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:00.375003 | orchestrator | 2025-04-14 01:07:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:00.375689 | orchestrator | 2025-04-14 01:07:00 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:00.376356 | orchestrator | 2025-04-14 01:07:00 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:03.412409 | orchestrator | 2025-04-14 01:07:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:03.412551 | orchestrator | 2025-04-14 01:07:03 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:06.459720 | orchestrator | 2025-04-14 01:07:03 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:06.459828 | orchestrator | 2025-04-14 01:07:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:06.459843 | orchestrator | 2025-04-14 01:07:03 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:06.459856 | orchestrator | 2025-04-14 01:07:03 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:06.459868 | orchestrator | 2025-04-14 01:07:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:06.459895 | orchestrator | 2025-04-14 01:07:06 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:06.460753 | 
orchestrator | 2025-04-14 01:07:06 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:06.463256 | orchestrator | 2025-04-14 01:07:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:06.465741 | orchestrator | 2025-04-14 01:07:06 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:06.468230 | orchestrator | 2025-04-14 01:07:06 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:09.510183 | orchestrator | 2025-04-14 01:07:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:09.510355 | orchestrator | 2025-04-14 01:07:09 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:09.515352 | orchestrator | 2025-04-14 01:07:09 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:09.515402 | orchestrator | 2025-04-14 01:07:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:09.515427 | orchestrator | 2025-04-14 01:07:09 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:12.549878 | orchestrator | 2025-04-14 01:07:09 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:12.550089 | orchestrator | 2025-04-14 01:07:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:12.550137 | orchestrator | 2025-04-14 01:07:12 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:12.550758 | orchestrator | 2025-04-14 01:07:12 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:12.553159 | orchestrator | 2025-04-14 01:07:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:12.553977 | orchestrator | 2025-04-14 01:07:12 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:12.555468 | orchestrator | 2025-04-14 01:07:12 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:12.556161 | orchestrator | 2025-04-14 01:07:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:15.581657 | orchestrator | 2025-04-14 01:07:15 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:15.582635 | orchestrator | 2025-04-14 01:07:15 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:15.582756 | orchestrator | 2025-04-14 01:07:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:15.583753 | orchestrator | 2025-04-14 01:07:15 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:15.584281 | orchestrator | 2025-04-14 01:07:15 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:15.584483 | orchestrator | 2025-04-14 01:07:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:18.607393 | orchestrator | 2025-04-14 01:07:18 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:18.607780 | orchestrator | 2025-04-14 01:07:18 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:18.607824 | orchestrator | 2025-04-14 01:07:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:18.608299 | orchestrator | 2025-04-14 01:07:18 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 
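The block of INFO lines here is a plain wait loop: the deploy run appears to have handed several kolla-ansible plays to the OSISM manager as background tasks, and the CLI keeps asking for each task's state until it leaves STARTED, sleeping one second between rounds. A minimal sketch of that pattern, assuming a caller-supplied state lookup (the real client call is not shown in this log, so get_state here is only a stand-in):

    import time
    from typing import Callable, Iterable

    def wait_for_tasks(task_ids: Iterable[str],
                       get_state: Callable[[str], str],
                       interval: float = 1.0) -> None:
        """Poll every task until it reaches a final state, printing the same
        kind of status lines as the log above."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)

    # Task IDs copied from the log above; the state lookup is a stand-in.
    if __name__ == "__main__":
        task_ids = [
            "ce30e165-8d29-416d-8b9e-293fa77d28fc",
            "b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9",
            "afc851a2-7042-41e3-be43-561439f9152f",
            "76ceba62-4722-42d7-8841-23271e5be829",
            "66521145-ef0c-4bc2-af75-161822f38492",
        ]
        wait_for_tasks(task_ids, get_state=lambda task_id: "SUCCESS")

Five tasks checked per round plus the one-second wait is consistent with the roughly three-second spacing between poll rounds visible in the timestamps.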
2025-04-14 01:07:18.608823 | orchestrator | 2025-04-14 01:07:18 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:18.608924 | orchestrator | 2025-04-14 01:07:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:21.659133 | orchestrator | 2025-04-14 01:07:21 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:21.659412 | orchestrator | 2025-04-14 01:07:21 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:21.660237 | orchestrator | 2025-04-14 01:07:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:21.660536 | orchestrator | 2025-04-14 01:07:21 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:21.661091 | orchestrator | 2025-04-14 01:07:21 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:24.695442 | orchestrator | 2025-04-14 01:07:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:24.695797 | orchestrator | 2025-04-14 01:07:24 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:24.696408 | orchestrator | 2025-04-14 01:07:24 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:24.696445 | orchestrator | 2025-04-14 01:07:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:24.696468 | orchestrator | 2025-04-14 01:07:24 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:24.697723 | orchestrator | 2025-04-14 01:07:24 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:27.733173 | orchestrator | 2025-04-14 01:07:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:27.733469 | orchestrator | 2025-04-14 01:07:27 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:27.734010 | orchestrator | 2025-04-14 01:07:27 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:27.734115 | orchestrator | 2025-04-14 01:07:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:27.735311 | orchestrator | 2025-04-14 01:07:27 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:27.735659 | orchestrator | 2025-04-14 01:07:27 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:27.735834 | orchestrator | 2025-04-14 01:07:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:30.767505 | orchestrator | 2025-04-14 01:07:30 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:30.767830 | orchestrator | 2025-04-14 01:07:30 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:30.767885 | orchestrator | 2025-04-14 01:07:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:30.769053 | orchestrator | 2025-04-14 01:07:30 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:30.769683 | orchestrator | 2025-04-14 01:07:30 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:33.818169 | orchestrator | 2025-04-14 01:07:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:33.818284 | orchestrator | 2025-04-14 01:07:33 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 
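Each placement-api container definition dumped earlier in this play carries a healthcheck of the form {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}. The healthcheck_curl named in the test is a helper shipped inside the kolla images; as a rough, purely illustrative Python equivalent (not the actual script), the test amounts to an HTTP GET against the service port that must answer within the configured timeout:

    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
        """Rough equivalent of the container healthcheck: healthy only if the
        endpoint answers with a 2xx/3xx status within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return 200 <= response.status < 400
        except urllib.error.HTTPError:
            # The server answered, but with a 4xx/5xx status.
            return False
        except (urllib.error.URLError, OSError):
            # No usable answer: connection refused, timeout, DNS failure, ...
            return False

    # Values taken from the placement-api definition on testbed-node-0.
    if __name__ == "__main__":
        print(healthcheck_curl("http://192.168.16.10:8780", timeout=30.0))

Docker then reruns the configured test every 'interval' seconds and marks the container unhealthy after 'retries' consecutive failures, which is what the interval, retries, start_period and timeout fields in these definitions control.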
2025-04-14 01:07:33.819226 | orchestrator | 2025-04-14 01:07:33 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:33.821461 | orchestrator | 2025-04-14 01:07:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:33.823737 | orchestrator | 2025-04-14 01:07:33 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:33.825024 | orchestrator | 2025-04-14 01:07:33 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:33.825317 | orchestrator | 2025-04-14 01:07:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:36.878630 | orchestrator | 2025-04-14 01:07:36 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:36.879667 | orchestrator | 2025-04-14 01:07:36 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:36.881756 | orchestrator | 2025-04-14 01:07:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:36.884050 | orchestrator | 2025-04-14 01:07:36 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:36.886488 | orchestrator | 2025-04-14 01:07:36 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:36.887581 | orchestrator | 2025-04-14 01:07:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:39.945098 | orchestrator | 2025-04-14 01:07:39 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:39.948538 | orchestrator | 2025-04-14 01:07:39 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:39.950368 | orchestrator | 2025-04-14 01:07:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:39.951684 | orchestrator | 2025-04-14 01:07:39 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:39.952927 | orchestrator | 2025-04-14 01:07:39 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:43.004474 | orchestrator | 2025-04-14 01:07:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:43.004656 | orchestrator | 2025-04-14 01:07:43 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:43.005936 | orchestrator | 2025-04-14 01:07:43 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:43.008327 | orchestrator | 2025-04-14 01:07:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:43.010100 | orchestrator | 2025-04-14 01:07:43 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:43.010905 | orchestrator | 2025-04-14 01:07:43 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:46.069648 | orchestrator | 2025-04-14 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:46.069821 | orchestrator | 2025-04-14 01:07:46 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:46.070143 | orchestrator | 2025-04-14 01:07:46 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:46.071203 | orchestrator | 2025-04-14 01:07:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:46.072125 | orchestrator | 2025-04-14 01:07:46 | INFO  | Task 
76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:46.073321 | orchestrator | 2025-04-14 01:07:46 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:49.113145 | orchestrator | 2025-04-14 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:49.113283 | orchestrator | 2025-04-14 01:07:49 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:49.116093 | orchestrator | 2025-04-14 01:07:49 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:49.119259 | orchestrator | 2025-04-14 01:07:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:49.120753 | orchestrator | 2025-04-14 01:07:49 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:49.122732 | orchestrator | 2025-04-14 01:07:49 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:52.178073 | orchestrator | 2025-04-14 01:07:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:52.178224 | orchestrator | 2025-04-14 01:07:52 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:52.178721 | orchestrator | 2025-04-14 01:07:52 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:52.181406 | orchestrator | 2025-04-14 01:07:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:52.183756 | orchestrator | 2025-04-14 01:07:52 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:52.186716 | orchestrator | 2025-04-14 01:07:52 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:52.187310 | orchestrator | 2025-04-14 01:07:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:55.218871 | orchestrator | 2025-04-14 01:07:55 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:55.219524 | orchestrator | 2025-04-14 01:07:55 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:55.220266 | orchestrator | 2025-04-14 01:07:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:55.221319 | orchestrator | 2025-04-14 01:07:55 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:55.222321 | orchestrator | 2025-04-14 01:07:55 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:58.257653 | orchestrator | 2025-04-14 01:07:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:07:58.257907 | orchestrator | 2025-04-14 01:07:58 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:07:58.261998 | orchestrator | 2025-04-14 01:07:58 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:07:58.262090 | orchestrator | 2025-04-14 01:07:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:07:58.263285 | orchestrator | 2025-04-14 01:07:58 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:07:58.263805 | orchestrator | 2025-04-14 01:07:58 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:07:58.266120 | orchestrator | 2025-04-14 01:07:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:01.307695 | orchestrator | 2025-04-14 01:08:01 | INFO  | Task 
ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:01.309091 | orchestrator | 2025-04-14 01:08:01 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:08:01.310997 | orchestrator | 2025-04-14 01:08:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:01.313316 | orchestrator | 2025-04-14 01:08:01 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:01.315443 | orchestrator | 2025-04-14 01:08:01 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:04.373481 | orchestrator | 2025-04-14 01:08:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:04.373683 | orchestrator | 2025-04-14 01:08:04 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:04.375356 | orchestrator | 2025-04-14 01:08:04 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state STARTED 2025-04-14 01:08:04.378582 | orchestrator | 2025-04-14 01:08:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:04.382404 | orchestrator | 2025-04-14 01:08:04 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:04.385243 | orchestrator | 2025-04-14 01:08:04 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:07.454945 | orchestrator | 2025-04-14 01:08:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:07.455072 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:07.466061 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task b8e8c0a4-7c7a-4f21-9bcd-30f7679c0bb9 is in state SUCCESS 2025-04-14 01:08:07.468109 | orchestrator | 2025-04-14 01:08:07.468341 | orchestrator | 2025-04-14 01:08:07.468408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:08:07.468429 | orchestrator | 2025-04-14 01:08:07.468865 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:08:07.468894 | orchestrator | Monday 14 April 2025 01:06:32 +0000 (0:00:00.244) 0:00:00.244 ********** 2025-04-14 01:08:07.468909 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.468925 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:07.468939 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.468953 | orchestrator | 2025-04-14 01:08:07.468968 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:08:07.468982 | orchestrator | Monday 14 April 2025 01:06:33 +0000 (0:00:00.401) 0:00:00.646 ********** 2025-04-14 01:08:07.468996 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-04-14 01:08:07.469011 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-04-14 01:08:07.469025 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-04-14 01:08:07.469039 | orchestrator | 2025-04-14 01:08:07.469054 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-04-14 01:08:07.469068 | orchestrator | 2025-04-14 01:08:07.469356 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-04-14 01:08:07.469374 | orchestrator | Monday 14 April 2025 01:06:33 +0000 (0:00:00.478) 0:00:01.125 ********** 2025-04-14 01:08:07.469388 | orchestrator | ok: 
[testbed-node-1] 2025-04-14 01:08:07.469402 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.469440 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.469455 | orchestrator | 2025-04-14 01:08:07.469469 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:08:07.469485 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:08:07.469978 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:08:07.469998 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:08:07.470012 | orchestrator | 2025-04-14 01:08:07.470077 | orchestrator | 2025-04-14 01:08:07.470091 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:08:07.470106 | orchestrator | Monday 14 April 2025 01:06:34 +0000 (0:00:00.796) 0:00:01.921 ********** 2025-04-14 01:08:07.470120 | orchestrator | =============================================================================== 2025-04-14 01:08:07.470134 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.80s 2025-04-14 01:08:07.470149 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2025-04-14 01:08:07.470163 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-04-14 01:08:07.470177 | orchestrator | 2025-04-14 01:08:07.470191 | orchestrator | 2025-04-14 01:08:07.470205 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:08:07.470218 | orchestrator | 2025-04-14 01:08:07.470233 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:08:07.470247 | orchestrator | Monday 14 April 2025 01:03:00 +0000 (0:00:00.662) 0:00:00.662 ********** 2025-04-14 01:08:07.470261 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.470276 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:07.470290 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.470304 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:08:07.470318 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:08:07.470331 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:08:07.470346 | orchestrator | 2025-04-14 01:08:07.470360 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:08:07.470374 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:01.300) 0:00:01.962 ********** 2025-04-14 01:08:07.470388 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-04-14 01:08:07.470402 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-04-14 01:08:07.470416 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-04-14 01:08:07.470430 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-04-14 01:08:07.470444 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-04-14 01:08:07.470458 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-04-14 01:08:07.470472 | orchestrator | 2025-04-14 01:08:07.470487 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-04-14 01:08:07.470501 | orchestrator | 2025-04-14 01:08:07.470515 | orchestrator | TASK [neutron : 
include_tasks] ************************************************* 2025-04-14 01:08:07.470529 | orchestrator | Monday 14 April 2025 01:03:03 +0000 (0:00:01.145) 0:00:03.108 ********** 2025-04-14 01:08:07.470543 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:08:07.470596 | orchestrator | 2025-04-14 01:08:07.470612 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-04-14 01:08:07.470629 | orchestrator | Monday 14 April 2025 01:03:05 +0000 (0:00:01.734) 0:00:04.843 ********** 2025-04-14 01:08:07.470645 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:07.470661 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.471019 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.471044 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:08:07.471085 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:08:07.471107 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:08:07.471130 | orchestrator | 2025-04-14 01:08:07.471152 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-04-14 01:08:07.471174 | orchestrator | Monday 14 April 2025 01:03:06 +0000 (0:00:01.416) 0:00:06.260 ********** 2025-04-14 01:08:07.471197 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:07.471219 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.471243 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.471267 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:08:07.471290 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:08:07.471367 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:08:07.471384 | orchestrator | 2025-04-14 01:08:07.471399 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-04-14 01:08:07.471413 | orchestrator | Monday 14 April 2025 01:03:07 +0000 (0:00:01.350) 0:00:07.611 ********** 2025-04-14 01:08:07.471427 | orchestrator | ok: [testbed-node-0] => { 2025-04-14 01:08:07.471442 | orchestrator |  "changed": false, 2025-04-14 01:08:07.471789 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472010 | orchestrator | } 2025-04-14 01:08:07.472027 | orchestrator | ok: [testbed-node-1] => { 2025-04-14 01:08:07.472040 | orchestrator |  "changed": false, 2025-04-14 01:08:07.472053 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472065 | orchestrator | } 2025-04-14 01:08:07.472078 | orchestrator | ok: [testbed-node-2] => { 2025-04-14 01:08:07.472090 | orchestrator |  "changed": false, 2025-04-14 01:08:07.472103 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472115 | orchestrator | } 2025-04-14 01:08:07.472127 | orchestrator | ok: [testbed-node-3] => { 2025-04-14 01:08:07.472140 | orchestrator |  "changed": false, 2025-04-14 01:08:07.472152 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472165 | orchestrator | } 2025-04-14 01:08:07.472177 | orchestrator | ok: [testbed-node-4] => { 2025-04-14 01:08:07.472189 | orchestrator |  "changed": false, 2025-04-14 01:08:07.472202 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472214 | orchestrator | } 2025-04-14 01:08:07.472227 | orchestrator | ok: [testbed-node-5] => { 2025-04-14 01:08:07.472239 | orchestrator |  "changed": false, 2025-04-14 01:08:07.472252 | orchestrator |  "msg": "All assertions passed" 2025-04-14 01:08:07.472264 | orchestrator | } 2025-04-14 
01:08:07.472277 | orchestrator | 2025-04-14 01:08:07.472289 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-04-14 01:08:07.472302 | orchestrator | Monday 14 April 2025 01:03:08 +0000 (0:00:00.826) 0:00:08.437 ********** 2025-04-14 01:08:07.472314 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.472327 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.472339 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.472352 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.472364 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.472377 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.472389 | orchestrator | 2025-04-14 01:08:07.472913 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-04-14 01:08:07.472939 | orchestrator | Monday 14 April 2025 01:03:09 +0000 (0:00:00.975) 0:00:09.413 ********** 2025-04-14 01:08:07.472950 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-04-14 01:08:07.472969 | orchestrator | 2025-04-14 01:08:07.472980 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-04-14 01:08:07.472991 | orchestrator | Monday 14 April 2025 01:03:12 +0000 (0:00:03.313) 0:00:12.726 ********** 2025-04-14 01:08:07.473009 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-04-14 01:08:07.473097 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-04-14 01:08:07.473337 | orchestrator | 2025-04-14 01:08:07.473363 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-04-14 01:08:07.473712 | orchestrator | Monday 14 April 2025 01:03:19 +0000 (0:00:06.370) 0:00:19.097 ********** 2025-04-14 01:08:07.473732 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:08:07.473751 | orchestrator | 2025-04-14 01:08:07.473769 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-04-14 01:08:07.473852 | orchestrator | Monday 14 April 2025 01:03:22 +0000 (0:00:03.417) 0:00:22.514 ********** 2025-04-14 01:08:07.473873 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:08:07.473892 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-04-14 01:08:07.473910 | orchestrator | 2025-04-14 01:08:07.473928 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-04-14 01:08:07.473946 | orchestrator | Monday 14 April 2025 01:03:26 +0000 (0:00:03.811) 0:00:26.325 ********** 2025-04-14 01:08:07.473964 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:08:07.473981 | orchestrator | 2025-04-14 01:08:07.473999 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-04-14 01:08:07.474050 | orchestrator | Monday 14 April 2025 01:03:29 +0000 (0:00:03.187) 0:00:29.513 ********** 2025-04-14 01:08:07.474071 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-04-14 01:08:07.474087 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-04-14 01:08:07.474104 | orchestrator | 2025-04-14 01:08:07.474122 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-14 
01:08:07.474140 | orchestrator | Monday 14 April 2025 01:03:37 +0000 (0:00:08.085) 0:00:37.599 ********** 2025-04-14 01:08:07.474158 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.474176 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.474194 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.474213 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.474231 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.474249 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.474267 | orchestrator | 2025-04-14 01:08:07.474285 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-04-14 01:08:07.474304 | orchestrator | Monday 14 April 2025 01:03:38 +0000 (0:00:00.734) 0:00:38.333 ********** 2025-04-14 01:08:07.474777 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.474798 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.474816 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.474835 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.474853 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.474869 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.474887 | orchestrator | 2025-04-14 01:08:07.474905 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-04-14 01:08:07.474924 | orchestrator | Monday 14 April 2025 01:03:43 +0000 (0:00:04.456) 0:00:42.790 ********** 2025-04-14 01:08:07.474942 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:07.474961 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:07.474979 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:07.474997 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:08:07.475016 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:08:07.475723 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:08:07.475772 | orchestrator | 2025-04-14 01:08:07.475791 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-04-14 01:08:07.475809 | orchestrator | Monday 14 April 2025 01:03:44 +0000 (0:00:01.195) 0:00:43.985 ********** 2025-04-14 01:08:07.475827 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.475846 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.476027 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.476045 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.476295 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.476315 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.476330 | orchestrator | 2025-04-14 01:08:07.476345 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-04-14 01:08:07.476375 | orchestrator | Monday 14 April 2025 01:03:49 +0000 (0:00:04.819) 0:00:48.804 ********** 2025-04-14 01:08:07.476393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.476414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.476432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.476449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.476547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.476601 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.476619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.476638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.476657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.476954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.477106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.477130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.477157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.477174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.477206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.477224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.478322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.478401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.478536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.478590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.478645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.478696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.478779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.478795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.478874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.478889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.478904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.478920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.478975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.478992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.479007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.479023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.479080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.479141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.479165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.479250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479301 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.479372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.479459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.479506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.479600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.479617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-04-14 01:08:07.479680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.479711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.479831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.479853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.479962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.479993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.480031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.480046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.480108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.480124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.480139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.480162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.480230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.480246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.480267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.480297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.480322 | orchestrator | 2025-04-14 01:08:07.480348 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-04-14 01:08:07.480382 | orchestrator | Monday 14 April 2025 01:03:53 +0000 (0:00:04.914) 0:00:53.718 ********** 2025-04-14 01:08:07.480406 | orchestrator | [WARNING]: Skipped 2025-04-14 01:08:07.480431 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-04-14 01:08:07.480454 | orchestrator | due to this access issue: 2025-04-14 01:08:07.480480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-04-14 01:08:07.480504 | orchestrator | a directory 2025-04-14 01:08:07.480528 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:08:07.480595 | orchestrator | 2025-04-14 01:08:07.480611 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-04-14 
01:08:07.480626 | orchestrator | Monday 14 April 2025 01:03:55 +0000 (0:00:01.087) 0:00:54.806 ********** 2025-04-14 01:08:07.480641 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:08:07.480657 | orchestrator | 2025-04-14 01:08:07.480671 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-04-14 01:08:07.480686 | orchestrator | Monday 14 April 2025 01:03:57 +0000 (0:00:02.210) 0:00:57.017 ********** 2025-04-14 01:08:07.480701 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.480741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.480758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.480774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.480798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.480823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.480839 | orchestrator | 2025-04-14 01:08:07.480853 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-04-14 01:08:07.480868 | orchestrator | Monday 14 April 2025 01:04:02 +0000 (0:00:04.967) 0:01:01.984 ********** 2025-04-14 01:08:07.480890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.480906 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.480921 | 
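
Each loop item above is one kolla-ansible service definition: the key names the service (neutron-server, neutron-ovn-metadata-agent, and so on) and the value describes the container name, image, whether the service is enabled, whether the current host is in the service's group, the bind-mounted volumes, and an optional healthcheck and haproxy section. Below is a minimal sketch of that shape as a plain Python dict; the field names come from the log and the values are copied from the neutron-ovn-metadata-agent entry, so it illustrates the structure rather than the actual role source.

# Sketch of the per-service structure seen in the loop items above.
neutron_services_example = {
    "neutron-ovn-metadata-agent": {
        "container_name": "neutron_ovn_metadata_agent",
        "image": "registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206",
        "privileged": True,
        "enabled": True,           # only enabled services are deployed
        "host_in_groups": True,    # and only on hosts mapped to the service's group
        "volumes": [
            "/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro",
            "/run/openvswitch:/run/openvswitch:shared",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
            "timeout": "30",
        },
    },
}

for name, svc in neutron_services_example.items():
    # Print the container name and the shell command its healthcheck runs.
    print(name, "->", svc["container_name"], "|", svc["healthcheck"]["test"][1])
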
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.480942 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.480957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.480972 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.480987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481002 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.481026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481042 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.481064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481079 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.481100 | orchestrator | 2025-04-14 01:08:07.481115 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-04-14 01:08:07.481134 | orchestrator | Monday 14 April 2025 01:04:06 +0000 (0:00:04.113) 0:01:06.098 ********** 2025-04-14 01:08:07.481149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.481164 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.481179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.481193 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.481219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.481234 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.481256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481278 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.481293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481308 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.481323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481337 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.481352 | orchestrator | 2025-04-14 01:08:07.481366 | orchestrator | TASK [neutron : Creating TLS 
backend PEM File] ********************************* 2025-04-14 01:08:07.481380 | orchestrator | Monday 14 April 2025 01:04:10 +0000 (0:00:04.228) 0:01:10.326 ********** 2025-04-14 01:08:07.481395 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.481409 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.481423 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.481437 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.481451 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.481465 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.481479 | orchestrator | 2025-04-14 01:08:07.481493 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-04-14 01:08:07.481508 | orchestrator | Monday 14 April 2025 01:04:14 +0000 (0:00:04.159) 0:01:14.485 ********** 2025-04-14 01:08:07.481522 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.481536 | orchestrator | 2025-04-14 01:08:07.481616 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-04-14 01:08:07.481635 | orchestrator | Monday 14 April 2025 01:04:14 +0000 (0:00:00.125) 0:01:14.611 ********** 2025-04-14 01:08:07.481649 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.481663 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.481677 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.481691 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.481706 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.481720 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.481734 | orchestrator | 2025-04-14 01:08:07.481748 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-04-14 01:08:07.481763 | orchestrator | Monday 14 April 2025 01:04:15 +0000 (0:00:00.710) 0:01:15.322 ********** 2025-04-14 01:08:07.481796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.481828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.481844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.481859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.481874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.481889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.481916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.481939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.481965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.481981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.481996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.482069 | 
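
Most items in these loops report skipping because kolla-ansible only acts on a service that is both enabled and mapped to the current host: entries with 'enabled': False (or the string 'no', as for neutron-tls-proxy) or 'host_in_groups': False fail the task condition, while the neutron-server and neutron-ovn-metadata-agent entries come back changed. The snippet below is a rough illustration of that selection, not the actual role code; the service shapes mirror the items in this log.

# Illustration only: which services would be acted on on this host.
def is_deployed_here(service):
    enabled = service.get("enabled", False)
    # 'enabled' appears both as a boolean and as the string 'no' in the log output.
    if isinstance(enabled, str):
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("host_in_groups", False))

services = {
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": True},   # changed
    "neutron-metering-agent": {"enabled": False, "host_in_groups": True},      # skipping
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},            # skipping
}

for name, svc in services.items():
    print(name, "->", "deploy" if is_deployed_here(svc) else "skip")
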
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.482141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.482155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482168 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.482181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.482216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482258 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.482271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.482361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.482388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.482453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.482467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482480 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.482493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.482521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.482609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.482701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.482748 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.482794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.482819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482832 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.482845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.482865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482912 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.482925 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.482957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.482971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.484132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.484144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.484207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484262 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.484274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.484284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484300 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.484394 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.484427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484454 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.484472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.484492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484530 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.484542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.484674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.484717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.484728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484743 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.484754 | orchestrator | 2025-04-14 01:08:07.484764 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-04-14 01:08:07.484775 | orchestrator | Monday 14 April 2025 01:04:19 +0000 (0:00:04.151) 0:01:19.473 ********** 2025-04-14 01:08:07.484786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.484809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.484866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484877 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.484908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.484930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.484982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.484992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.485066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.485088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.485140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.485151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.485173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.485247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.485320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.485381 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.485414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.485440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.485456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.486198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.486327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.486375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.486403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.486456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.486540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.486605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486624 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.486633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.486656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.486695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.486728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.486770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-14 01:08:07.486780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.486810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.486852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.486875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.486884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.486923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.486933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.486942 | orchestrator | 2025-04-14 01:08:07.486952 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-04-14 01:08:07.486971 | orchestrator | Monday 14 April 2025 01:04:24 +0000 (0:00:04.367) 0:01:23.841 ********** 2025-04-14 01:08:07.486980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.487002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.487087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.487241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.487372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.487485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.487508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.487568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.487661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.487687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 
'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487733 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:07.487742 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:07.487751 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:07.487760 | orchestrator | 2025-04-14 01:08:07 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:07.487769 | orchestrator | 2025-04-14 01:08:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:07.487779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.487814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.487849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.487859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.487878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.487887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.487930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.487959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.487968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': 
False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.488166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.488175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.488371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488432 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488464 | orchestrator | 2025-04-14 01:08:07.488473 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-04-14 01:08:07.488482 | orchestrator | Monday 14 April 2025 01:04:31 +0000 (0:00:07.004) 0:01:30.845 ********** 2025-04-14 01:08:07.488491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-04-14 01:08:07.488512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.488583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488615 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-04-14 01:08:07.488731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488745 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.488754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.488763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.488825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.488915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.488924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.488962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.488971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.488987 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.488996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
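The loop results above and below record kolla-ansible's neutron role iterating over its per-service map ('neutron-server', 'neutron-ovn-metadata-agent', and so on) for every testbed node: an item is reported as "changed" only when that service is enabled and the node belongs to the service's host group, while every other combination is reported as "skipping". What follows is a minimal Python sketch of that gate as it can be read off the output; it is an illustration under that assumption, not the role's actual implementation.

def should_deploy(service: dict) -> bool:
    """Gate implied by the loop output: act only on enabled services on hosts in their group."""
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):          # the service map mixes booleans with 'yes'/'no' strings
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("host_in_groups", False))

# Values taken from items in the log above:
ovn_metadata_agent = {"enabled": True, "host_in_groups": True}    # -> changed
neutron_tls_proxy = {"enabled": "no", "host_in_groups": False}    # -> skipping
assert should_deploy(ovn_metadata_agent)
assert not should_deploy(neutron_tls_proxy)

This also explains why the same service key shows up as "changed" on some nodes and "skipping" on others within one task: the service definition is constant, but host_in_groups is evaluated per node.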
2025-04-14 01:08:07.489005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.489065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.489155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.489202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})  2025-04-14 01:08:07.489216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489225 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.489234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.489243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.489302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.489394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.489433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.489473 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.489482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.489632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.489746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.489829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 
01:08:07.489838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.489899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.489917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.489942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.489969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.489979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.489992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-14 01:08:07.490037 | orchestrator |
2025-04-14 01:08:07.490049 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-04-14 01:08:07.490058 | orchestrator | Monday 14 April 2025 01:04:34 +0000 (0:00:03.091) 0:01:33.937 **********
2025-04-14 01:08:07.490078 | orchestrator | skipping: [testbed-node-3]
2025-04-14 01:08:07.490088 | orchestrator | skipping: [testbed-node-4]
2025-04-14 01:08:07.490097 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:08:07.490106 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:08:07.490114 | orchestrator | skipping: [testbed-node-5]
2025-04-14 01:08:07.490123 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:08:07.490131 | orchestrator |
2025-04-14 01:08:07.490139 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-04-14 01:08:07.490147 | orchestrator | Monday 14 April 2025 01:04:39 +0000 (0:00:05.309) 0:01:39.246 **********
2025-04-14 01:08:07.490159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206',
'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.490167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.490225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.490309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.490342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490359 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.490384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.490394 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.490443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.490532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.490587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.490624 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490642 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.490650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.490691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.490817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.490876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.490885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490893 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.490902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.490921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.490960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.490969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.490998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.491047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.491097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.491140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.491180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.491262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.491311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.491343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.491394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.491489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.491497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.491530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.491539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.491547 | orchestrator | 2025-04-14 01:08:07.491575 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-04-14 01:08:07.491584 | orchestrator | Monday 14 April 2025 01:04:45 +0000 (0:00:05.667) 0:01:44.913 ********** 2025-04-14 01:08:07.491592 | orchestrator | skipping: [testbed-node-1] 2025-04-14 
01:08:07.491600 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.491608 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491616 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.491624 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491632 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.491640 | orchestrator | 2025-04-14 01:08:07.491648 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-04-14 01:08:07.491656 | orchestrator | Monday 14 April 2025 01:04:47 +0000 (0:00:02.462) 0:01:47.375 ********** 2025-04-14 01:08:07.491665 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.491673 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491684 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.491692 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491700 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.491708 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.491716 | orchestrator | 2025-04-14 01:08:07.491724 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-04-14 01:08:07.491732 | orchestrator | Monday 14 April 2025 01:04:51 +0000 (0:00:03.787) 0:01:51.162 ********** 2025-04-14 01:08:07.491740 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.491748 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.491756 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491764 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491772 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.491780 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.491788 | orchestrator | 2025-04-14 01:08:07.491796 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-04-14 01:08:07.491804 | orchestrator | Monday 14 April 2025 01:04:55 +0000 (0:00:03.658) 0:01:54.820 ********** 2025-04-14 01:08:07.491812 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.491820 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.491828 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491836 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.491844 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491860 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.491868 | orchestrator | 2025-04-14 01:08:07.491876 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-04-14 01:08:07.491884 | orchestrator | Monday 14 April 2025 01:04:59 +0000 (0:00:04.570) 0:01:59.391 ********** 2025-04-14 01:08:07.491892 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.491900 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491908 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.491916 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491924 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.491932 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.491940 | orchestrator | 2025-04-14 01:08:07.491948 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-04-14 01:08:07.491956 | orchestrator | Monday 14 April 2025 01:05:03 +0000 (0:00:04.118) 0:02:03.510 ********** 2025-04-14 01:08:07.491964 | orchestrator | skipping: [testbed-node-0] 
2025-04-14 01:08:07.491972 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.491980 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.491988 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.491996 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.492004 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.492011 | orchestrator | 2025-04-14 01:08:07.492020 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-04-14 01:08:07.492031 | orchestrator | Monday 14 April 2025 01:05:07 +0000 (0:00:04.233) 0:02:07.744 ********** 2025-04-14 01:08:07.492039 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492048 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.492070 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492079 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.492087 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492095 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.492104 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492112 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.492120 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492128 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.492136 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-04-14 01:08:07.492144 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.492152 | orchestrator | 2025-04-14 01:08:07.492160 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-04-14 01:08:07.492168 | orchestrator | Monday 14 April 2025 01:05:11 +0000 (0:00:03.182) 0:02:10.927 ********** 2025-04-14 01:08:07.492183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.492193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.492245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.492312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.492331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.492369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.492391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492400 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.492408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.492417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.492476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.492531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.492603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.492647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.492656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492665 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.492687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.492697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.492718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.492808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.492853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.492897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.492941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.492969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.492999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.493014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.493069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493078 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.493098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.493129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.493154 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.493185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.493220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493228 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.493297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.493343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493359 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.493367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.493389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.493429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 
01:08:07.493483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.493508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-04-14 01:08:07.493566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-04-14 01:08:07.493574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-14 01:08:07.493582 | orchestrator | skipping: [testbed-node-5]
2025-04-14 01:08:07.493589 | orchestrator |
2025-04-14 01:08:07.493596 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-04-14 01:08:07.493603 | orchestrator | Monday 14 April 2025 01:05:14 +0000 (0:00:03.571) 0:02:14.499 **********
2025-04-14 01:08:07.493610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-04-14 01:08:07.493622 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.493669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 
01:08:07.493734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.493741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.493790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493798 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493805 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.493813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.493824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.493871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.493948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.493959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.493967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.493990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.493998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494006 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.494013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.494045 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.494093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494159 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.494167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.494200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494226 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.494234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.494391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.494435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494462 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.494502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
2025-04-14 01:08:07.494521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.494529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494567 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.494575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.494587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.494626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.494696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.494723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494738 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.494756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.494770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.494809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.494875 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.494888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494896 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.494904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.494912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.494919 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.494926 | orchestrator | 2025-04-14 01:08:07.494933 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-04-14 01:08:07.494940 | orchestrator | Monday 14 April 2025 01:05:17 +0000 (0:00:02.918) 0:02:17.417 ********** 2025-04-14 01:08:07.494947 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.494955 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.494962 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.494973 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.494981 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.494988 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495000 | orchestrator | 2025-04-14 01:08:07.495007 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-04-14 01:08:07.495015 | orchestrator | Monday 14 April 2025 01:05:21 +0000 (0:00:03.509) 0:02:20.927 ********** 2025-04-14 01:08:07.495022 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495029 | orchestrator | 
skipping: [testbed-node-2] 2025-04-14 01:08:07.495037 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495044 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:08:07.495050 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:08:07.495057 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:08:07.495064 | orchestrator | 2025-04-14 01:08:07.495071 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-04-14 01:08:07.495088 | orchestrator | Monday 14 April 2025 01:05:27 +0000 (0:00:06.502) 0:02:27.430 ********** 2025-04-14 01:08:07.495097 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495104 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495111 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495118 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495125 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495132 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495139 | orchestrator | 2025-04-14 01:08:07.495146 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-04-14 01:08:07.495153 | orchestrator | Monday 14 April 2025 01:05:30 +0000 (0:00:02.749) 0:02:30.180 ********** 2025-04-14 01:08:07.495160 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495168 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495174 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495182 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495189 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495196 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495203 | orchestrator | 2025-04-14 01:08:07.495210 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-04-14 01:08:07.495217 | orchestrator | Monday 14 April 2025 01:05:34 +0000 (0:00:04.036) 0:02:34.217 ********** 2025-04-14 01:08:07.495224 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495231 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495238 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495245 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495252 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495259 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495266 | orchestrator | 2025-04-14 01:08:07.495273 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-04-14 01:08:07.495280 | orchestrator | Monday 14 April 2025 01:05:37 +0000 (0:00:02.770) 0:02:36.987 ********** 2025-04-14 01:08:07.495287 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495295 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495302 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495309 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495316 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495323 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495330 | orchestrator | 2025-04-14 01:08:07.495337 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-04-14 01:08:07.495344 | orchestrator | Monday 14 April 2025 01:05:39 +0000 (0:00:02.711) 0:02:39.699 ********** 2025-04-14 01:08:07.495351 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495358 | orchestrator 
| skipping: [testbed-node-4] 2025-04-14 01:08:07.495366 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495373 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495380 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495387 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495398 | orchestrator | 2025-04-14 01:08:07.495406 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-04-14 01:08:07.495413 | orchestrator | Monday 14 April 2025 01:05:42 +0000 (0:00:02.802) 0:02:42.502 ********** 2025-04-14 01:08:07.495420 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495427 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495434 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495441 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495448 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495455 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495462 | orchestrator | 2025-04-14 01:08:07.495469 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-04-14 01:08:07.495476 | orchestrator | Monday 14 April 2025 01:05:49 +0000 (0:00:06.619) 0:02:49.122 ********** 2025-04-14 01:08:07.495483 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495490 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495497 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495504 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495511 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495518 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495525 | orchestrator | 2025-04-14 01:08:07.495532 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-04-14 01:08:07.495539 | orchestrator | Monday 14 April 2025 01:05:52 +0000 (0:00:03.306) 0:02:52.428 ********** 2025-04-14 01:08:07.495546 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.495566 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495576 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495583 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495590 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495597 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495604 | orchestrator | 2025-04-14 01:08:07.495611 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-04-14 01:08:07.495618 | orchestrator | Monday 14 April 2025 01:05:55 +0000 (0:00:03.236) 0:02:55.665 ********** 2025-04-14 01:08:07.495625 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495633 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.495640 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495647 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.495654 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495661 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.495668 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495675 | orchestrator | 
skipping: [testbed-node-1] 2025-04-14 01:08:07.495685 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495692 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:07.495699 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-04-14 01:08:07.495706 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.495713 | orchestrator | 2025-04-14 01:08:07.495732 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-04-14 01:08:07.495740 | orchestrator | Monday 14 April 2025 01:06:00 +0000 (0:00:04.871) 0:03:00.537 ********** 2025-04-14 01:08:07.495747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.495760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.495794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.495816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.495856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.495877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.495885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.495918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.495930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.495954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.495961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.495978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.495991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.495998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.496096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.496111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496163 | orchestrator | skipping: [testbed-node-2] 2025-04-14 
01:08:07.496171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496233 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:07.496240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.496259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496285 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496308 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:07.496315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.496340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.496372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.496443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496493 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:08:07.496501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.496512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.496587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.496662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496687 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496711 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:08:07.496719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.496730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.496771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.496852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.496859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.496885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.496893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496901 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:08:07.496908 | orchestrator | 2025-04-14 01:08:07.496915 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-04-14 01:08:07.496922 | orchestrator | Monday 14 April 2025 01:06:03 +0000 (0:00:02.796) 0:03:03.334 ********** 2025-04-14 01:08:07.496933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.496941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496966 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.496975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.497075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.497087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497140 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.497250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-04-14 01:08:07.497432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-04-14 01:08:07.497590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-04-14 01:08:07.497629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.497673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.497746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-04-14 01:08:07.497812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:08:07.497835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:08:07.497845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-04-14 01:08:07.497852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-04-14 01:08:07.497865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'index.docker.io/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-04-14 01:08:07.497871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'index.docker.io/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-04-14 01:08:07.497878 | orchestrator |
2025-04-14 01:08:07.497884 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-04-14 01:08:07.497891 | orchestrator | Monday 14 April 2025 01:06:07 +0000 (0:00:03.748) 0:03:07.082 **********
2025-04-14 01:08:07.497901 | orchestrator | skipping: [testbed-node-0]
2025-04-14 01:08:07.497907 | orchestrator | skipping: [testbed-node-1]
2025-04-14 01:08:07.497913 | orchestrator | skipping: [testbed-node-2]
2025-04-14 01:08:07.497920 | orchestrator | skipping: [testbed-node-3]
2025-04-14 01:08:07.497926 | orchestrator | skipping: [testbed-node-4]
2025-04-14 01:08:07.497932 | orchestrator | skipping: [testbed-node-5]
2025-04-14 01:08:07.497938 | orchestrator |
2025-04-14 01:08:07.497945 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-04-14 01:08:07.497951 | orchestrator | Monday 14 April 2025 01:06:08 +0000 (0:00:00.763) 0:03:07.845 **********
2025-04-14 01:08:07.497957 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:08:07.497963 | orchestrator |
2025-04-14 01:08:07.497970 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-04-14 01:08:07.497976 | orchestrator | Monday 14 April 2025 01:06:10 +0000 (0:00:02.613) 0:03:10.459 **********
2025-04-14 01:08:07.497982 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:08:07.497988 | orchestrator |
2025-04-14 01:08:07.497995 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-04-14 01:08:07.498001 | orchestrator | Monday 14 April 2025 01:06:13 +0000 (0:00:02.340) 0:03:12.799 **********
2025-04-14 01:08:07.498007 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:08:07.498013 | orchestrator |
2025-04-14 01:08:07.498056 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-14 01:08:07.498062 | orchestrator | Monday 14 April 2025 01:06:50 +0000 (0:00:37.904) 0:03:50.703 **********
2025-04-14 01:08:07.498069 | orchestrator |
2025-04-14 01:08:07.498075 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-14 01:08:07.498081 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.068) 0:03:50.771 **********
2025-04-14 01:08:07.498087 | orchestrator |
2025-04-14 01:08:07.498094 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-14 01:08:07.498100 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.285) 0:03:51.057 **********
2025-04-14 01:08:07.498106 | orchestrator |
2025-04-14 01:08:07.498112 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-04-14 01:08:07.498119 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.063) 0:03:51.120 **********
2025-04-14 01:08:07.498125 | orchestrator |
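The skipped loop items above are kolla-ansible-style service definitions: each 'healthcheck' dict carries durations in seconds (as strings) plus a CMD-SHELL test, and is what ultimately becomes the container's Docker healthcheck. A minimal sketch of that conversion, assuming the Docker Engine API's Test/Interval/Timeout/StartPeriod/Retries fields (durations in nanoseconds); illustrative only, not the deployment's own code:

    # Illustrative sketch: convert a kolla-style healthcheck dict (seconds as
    # strings, as dumped in the loop items above) into the nanosecond-based
    # structure the Docker Engine API expects for a container healthcheck.
    def to_docker_healthcheck(hc):
        ns = lambda seconds: int(seconds) * 1_000_000_000  # seconds -> nanoseconds
        return {
            "Test": hc["test"],
            "Interval": ns(hc["interval"]),
            "Timeout": ns(hc["timeout"]),
            "StartPeriod": ns(hc["start_period"]),
            "Retries": int(hc["retries"]),
        }

    # Example taken from the neutron-server definition logged above.
    example = {"interval": "30", "retries": "3", "start_period": "5",
               "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.15:9696"],
               "timeout": "30"}
    print(to_docker_healthcheck(example))
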
01:08:07.498131 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-14 01:08:07.498141 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.057) 0:03:51.178 ********** 2025-04-14 01:08:10.531657 | orchestrator | 2025-04-14 01:08:10.531799 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-04-14 01:08:10.531820 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.057) 0:03:51.235 ********** 2025-04-14 01:08:10.531835 | orchestrator | 2025-04-14 01:08:10.531850 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-04-14 01:08:10.531864 | orchestrator | Monday 14 April 2025 01:06:51 +0000 (0:00:00.283) 0:03:51.519 ********** 2025-04-14 01:08:10.531878 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:10.531893 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:08:10.531908 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:08:10.531922 | orchestrator | 2025-04-14 01:08:10.531936 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-04-14 01:08:10.531951 | orchestrator | Monday 14 April 2025 01:07:16 +0000 (0:00:25.175) 0:04:16.694 ********** 2025-04-14 01:08:10.531965 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:08:10.531979 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:08:10.531993 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:08:10.532007 | orchestrator | 2025-04-14 01:08:10.532021 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:08:10.532036 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-04-14 01:08:10.532084 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-04-14 01:08:10.532098 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-04-14 01:08:10.532113 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-14 01:08:10.532127 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-14 01:08:10.532142 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-04-14 01:08:10.532156 | orchestrator | 2025-04-14 01:08:10.532170 | orchestrator | 2025-04-14 01:08:10.532185 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:08:10.532199 | orchestrator | Monday 14 April 2025 01:08:04 +0000 (0:00:47.796) 0:05:04.491 ********** 2025-04-14 01:08:10.532213 | orchestrator | =============================================================================== 2025-04-14 01:08:10.532226 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 47.80s 2025-04-14 01:08:10.532240 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.90s 2025-04-14 01:08:10.532254 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.18s 2025-04-14 01:08:10.532268 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.09s 2025-04-14 01:08:10.532282 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 
7.00s 2025-04-14 01:08:10.532311 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 6.62s 2025-04-14 01:08:10.532325 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.50s 2025-04-14 01:08:10.532339 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.37s 2025-04-14 01:08:10.532353 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.67s 2025-04-14 01:08:10.532367 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 5.31s 2025-04-14 01:08:10.532381 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.97s 2025-04-14 01:08:10.532395 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.91s 2025-04-14 01:08:10.532408 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.87s 2025-04-14 01:08:10.532423 | orchestrator | Setting sysctl values --------------------------------------------------- 4.82s 2025-04-14 01:08:10.532437 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 4.57s 2025-04-14 01:08:10.532451 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.46s 2025-04-14 01:08:10.532464 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.37s 2025-04-14 01:08:10.532478 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.23s 2025-04-14 01:08:10.532492 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.23s 2025-04-14 01:08:10.532506 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.16s 2025-04-14 01:08:10.532540 | orchestrator | 2025-04-14 01:08:10 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:10.533707 | orchestrator | 2025-04-14 01:08:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:10.533738 | orchestrator | 2025-04-14 01:08:10 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:10.534760 | orchestrator | 2025-04-14 01:08:10 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:10.535857 | orchestrator | 2025-04-14 01:08:10 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:10.536168 | orchestrator | 2025-04-14 01:08:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:13.594423 | orchestrator | 2025-04-14 01:08:13 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:13.594925 | orchestrator | 2025-04-14 01:08:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:13.595898 | orchestrator | 2025-04-14 01:08:13 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:13.597180 | orchestrator | 2025-04-14 01:08:13 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:13.597834 | orchestrator | 2025-04-14 01:08:13 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:16.638304 | orchestrator | 2025-04-14 01:08:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:16.638432 | orchestrator | 2025-04-14 01:08:16 | INFO  | Task 
ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:16.639525 | orchestrator | 2025-04-14 01:08:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:16.642756 | orchestrator | 2025-04-14 01:08:16 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:16.643437 | orchestrator | 2025-04-14 01:08:16 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:16.647150 | orchestrator | 2025-04-14 01:08:16 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:19.706013 | orchestrator | 2025-04-14 01:08:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:19.706209 | orchestrator | 2025-04-14 01:08:19 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:19.706305 | orchestrator | 2025-04-14 01:08:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:19.706330 | orchestrator | 2025-04-14 01:08:19 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:19.707009 | orchestrator | 2025-04-14 01:08:19 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:19.708234 | orchestrator | 2025-04-14 01:08:19 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:22.750404 | orchestrator | 2025-04-14 01:08:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:22.750625 | orchestrator | 2025-04-14 01:08:22 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:22.750903 | orchestrator | 2025-04-14 01:08:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:22.750943 | orchestrator | 2025-04-14 01:08:22 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state STARTED 2025-04-14 01:08:22.753620 | orchestrator | 2025-04-14 01:08:22 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:25.794785 | orchestrator | 2025-04-14 01:08:22 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:25.794906 | orchestrator | 2025-04-14 01:08:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:25.794945 | orchestrator | 2025-04-14 01:08:25 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:25.798242 | orchestrator | 2025-04-14 01:08:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:25.798403 | orchestrator | 2025-04-14 01:08:25.798429 | orchestrator | 2025-04-14 01:08:25.798444 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:08:25.798459 | orchestrator | 2025-04-14 01:08:25.798473 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:08:25.798487 | orchestrator | Monday 14 April 2025 01:06:22 +0000 (0:00:00.311) 0:00:00.311 ********** 2025-04-14 01:08:25.798501 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:08:25.798517 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:08:25.798571 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:08:25.798587 | orchestrator | 2025-04-14 01:08:25.798601 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:08:25.798616 | orchestrator | Monday 14 April 2025 01:06:22 +0000 (0:00:00.408) 0:00:00.719 ********** 
2025-04-14 01:08:25.798630 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-04-14 01:08:25.798644 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-04-14 01:08:25.798659 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-04-14 01:08:25.798673 | orchestrator | 2025-04-14 01:08:25.798687 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-04-14 01:08:25.798701 | orchestrator | 2025-04-14 01:08:25.798715 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-14 01:08:25.798729 | orchestrator | Monday 14 April 2025 01:06:23 +0000 (0:00:00.298) 0:00:01.018 ********** 2025-04-14 01:08:25.798743 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:08:25.798758 | orchestrator | 2025-04-14 01:08:25.798772 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-04-14 01:08:25.798786 | orchestrator | Monday 14 April 2025 01:06:24 +0000 (0:00:00.732) 0:00:01.750 ********** 2025-04-14 01:08:25.798801 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-04-14 01:08:25.798814 | orchestrator | 2025-04-14 01:08:25.798828 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-04-14 01:08:25.798842 | orchestrator | Monday 14 April 2025 01:06:27 +0000 (0:00:03.517) 0:00:05.267 ********** 2025-04-14 01:08:25.798856 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-04-14 01:08:25.798871 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-04-14 01:08:25.798885 | orchestrator | 2025-04-14 01:08:25.798899 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-04-14 01:08:25.798913 | orchestrator | Monday 14 April 2025 01:06:34 +0000 (0:00:06.522) 0:00:11.790 ********** 2025-04-14 01:08:25.798927 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:08:25.798941 | orchestrator | 2025-04-14 01:08:25.798955 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-04-14 01:08:25.798969 | orchestrator | Monday 14 April 2025 01:06:37 +0000 (0:00:03.399) 0:00:15.189 ********** 2025-04-14 01:08:25.798984 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:08:25.798998 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-04-14 01:08:25.799028 | orchestrator | 2025-04-14 01:08:25.799042 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-04-14 01:08:25.799056 | orchestrator | Monday 14 April 2025 01:06:41 +0000 (0:00:03.783) 0:00:18.972 ********** 2025-04-14 01:08:25.799070 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:08:25.799084 | orchestrator | 2025-04-14 01:08:25.799098 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-04-14 01:08:25.799112 | orchestrator | Monday 14 April 2025 01:06:44 +0000 (0:00:03.247) 0:00:22.220 ********** 2025-04-14 01:08:25.799126 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-04-14 01:08:25.799140 | orchestrator | 2025-04-14 01:08:25.799154 | orchestrator | TASK [magnum 
: Creating Magnum trustee domain] ********************************* 2025-04-14 01:08:25.799193 | orchestrator | Monday 14 April 2025 01:06:48 +0000 (0:00:04.354) 0:00:26.574 ********** 2025-04-14 01:08:25.799218 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.799233 | orchestrator | 2025-04-14 01:08:25.799247 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-04-14 01:08:25.799261 | orchestrator | Monday 14 April 2025 01:06:52 +0000 (0:00:03.730) 0:00:30.305 ********** 2025-04-14 01:08:25.799275 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.799289 | orchestrator | 2025-04-14 01:08:25.799303 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-04-14 01:08:25.799317 | orchestrator | Monday 14 April 2025 01:06:56 +0000 (0:00:04.064) 0:00:34.370 ********** 2025-04-14 01:08:25.799331 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.799345 | orchestrator | 2025-04-14 01:08:25.799359 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-04-14 01:08:25.799373 | orchestrator | Monday 14 April 2025 01:06:59 +0000 (0:00:03.270) 0:00:37.640 ********** 2025-04-14 01:08:25.799403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.799424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.799439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.799455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.799510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.799562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.799580 | orchestrator | 2025-04-14 01:08:25.799594 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-04-14 01:08:25.799609 | orchestrator | Monday 14 April 2025 01:07:02 +0000 (0:00:02.608) 0:00:40.248 ********** 2025-04-14 01:08:25.799623 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.799637 | orchestrator | 2025-04-14 01:08:25.799651 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-04-14 01:08:25.799666 | orchestrator | Monday 14 April 2025 01:07:02 +0000 (0:00:00.161) 0:00:40.409 ********** 2025-04-14 01:08:25.799679 | 
orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.799693 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.799707 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.799721 | orchestrator | 2025-04-14 01:08:25.799735 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-04-14 01:08:25.799749 | orchestrator | Monday 14 April 2025 01:07:03 +0000 (0:00:00.486) 0:00:40.896 ********** 2025-04-14 01:08:25.799763 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:08:25.799777 | orchestrator | 2025-04-14 01:08:25.799791 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-04-14 01:08:25.799805 | orchestrator | Monday 14 April 2025 01:07:03 +0000 (0:00:00.525) 0:00:41.422 ********** 2025-04-14 01:08:25.799819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.799841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.799857 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.799872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2025-04-14 01:08:25.799908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.799924 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.799939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.799954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.799976 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.799990 | orchestrator | 2025-04-14 01:08:25.800004 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-04-14 01:08:25.800018 | orchestrator | Monday 14 April 2025 01:07:04 +0000 (0:00:00.905) 0:00:42.328 ********** 2025-04-14 01:08:25.800032 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.800046 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.800060 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.800074 | orchestrator | 2025-04-14 01:08:25.800088 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-14 01:08:25.800103 | orchestrator | Monday 14 April 2025 01:07:04 +0000 (0:00:00.285) 0:00:42.613 ********** 2025-04-14 01:08:25.800116 | orchestrator | included: 
/ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:08:25.800131 | orchestrator | 2025-04-14 01:08:25.800145 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-04-14 01:08:25.800159 | orchestrator | Monday 14 April 2025 01:07:05 +0000 (0:00:00.812) 0:00:43.425 ********** 2025-04-14 01:08:25.800173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800301 | orchestrator | 2025-04-14 01:08:25.800315 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-04-14 01:08:25.800329 | orchestrator | Monday 14 April 2025 01:07:08 +0000 (0:00:03.014) 0:00:46.440 ********** 2025-04-14 01:08:25.800366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800404 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.800418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800448 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.800463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800520 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.800558 | orchestrator | 2025-04-14 01:08:25.800573 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-04-14 01:08:25.800587 | orchestrator | Monday 14 April 2025 01:07:11 +0000 (0:00:02.307) 0:00:48.748 ********** 2025-04-14 01:08:25.800602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800632 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.800647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800697 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.800711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.800733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.800748 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.800762 | orchestrator | 2025-04-14 01:08:25.800776 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-04-14 01:08:25.800791 | orchestrator | Monday 14 April 2025 01:07:13 +0000 (0:00:02.936) 0:00:51.684 ********** 2025-04-14 01:08:25.800805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.800881 | orchestrator | 2025-04-14 01:08:25 | INFO  | Task 76ceba62-4722-42d7-8841-23271e5be829 is in state SUCCESS 2025-04-14 01:08:25.800897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206',
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.800953 | orchestrator | 2025-04-14 01:08:25.800967 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-04-14 01:08:25.800981 | orchestrator | Monday 14 April 2025 01:07:17 +0000 (0:00:03.426) 0:00:55.111 ********** 2025-04-14 01:08:25.801003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801041 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801120 | orchestrator | 2025-04-14 01:08:25.801141 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-04-14 01:08:25.801156 | orchestrator | Monday 14 April 2025 01:07:28 +0000 (0:00:11.423) 0:01:06.534 ********** 2025-04-14 01:08:25.801170 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.801185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.801200 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.801224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.801240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.801262 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.801284 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-04-14 01:08:25.801300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:08:25.801314 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.801329 | orchestrator | 2025-04-14 01:08:25.801343 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-04-14 01:08:25.801357 | orchestrator | Monday 14 April 2025 01:07:31 +0000 (0:00:02.229) 0:01:08.764 ********** 2025-04-14 01:08:25.801372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-04-14 01:08:25.801441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:08:25.801486 | orchestrator | 2025-04-14 01:08:25.801500 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-04-14 01:08:25.801514 | orchestrator | Monday 14 April 2025 01:07:33 +0000 (0:00:02.839) 0:01:11.603 ********** 2025-04-14 01:08:25.801572 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:08:25.801591 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:08:25.801605 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:08:25.801619 | orchestrator | 2025-04-14 01:08:25.801633 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-04-14 01:08:25.801647 | orchestrator | Monday 14 April 2025 01:07:34 +0000 (0:00:00.317) 0:01:11.920 ********** 2025-04-14 01:08:25.801661 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.801675 | orchestrator | 2025-04-14 01:08:25.801689 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-04-14 01:08:25.801711 | orchestrator | Monday 14 April 2025 01:07:36 +0000 (0:00:02.668) 0:01:14.589 ********** 2025-04-14 01:08:25.801724 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.801748 | orchestrator | 2025-04-14 01:08:25.801772 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-04-14 01:08:25.801792 | orchestrator | Monday 14 April 2025 01:07:39 +0000 (0:00:02.396) 0:01:16.985 ********** 2025-04-14 01:08:25.801813 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.801832 | orchestrator | 2025-04-14 01:08:25.801852 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-14 01:08:25.801874 | orchestrator | Monday 14 April 2025 01:07:53 +0000 (0:00:14.121) 0:01:31.106 ********** 2025-04-14 01:08:25.801896 | orchestrator | 2025-04-14 01:08:25.801917 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-14 01:08:25.801940 | orchestrator | Monday 14 April 2025 01:07:53 +0000 (0:00:00.072) 0:01:31.179 ********** 2025-04-14 01:08:25.801963 | orchestrator | 2025-04-14 01:08:25.801985 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-04-14 01:08:25.802010 | orchestrator | Monday 14 April 2025 01:07:53 +0000 (0:00:00.192) 0:01:31.372 ********** 2025-04-14 01:08:25.802098 | orchestrator | 2025-04-14 01:08:25.802113 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-04-14 01:08:25.802129 | orchestrator | Monday 14 April 2025 01:07:53 +0000 (0:00:00.059) 0:01:31.431 ********** 2025-04-14 01:08:25.802153 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.802175 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:08:25.802198 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:08:25.802221 | orchestrator | 2025-04-14 01:08:25.802244 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-04-14 01:08:25.802269 | orchestrator | Monday 14 April 2025 01:08:13 +0000 (0:00:19.461) 0:01:50.893 ********** 2025-04-14 01:08:25.802285 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:08:25.802299 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:08:25.802327 | orchestrator | changed: 
[testbed-node-2] 2025-04-14 01:08:28.854946 | orchestrator | 2025-04-14 01:08:28.855132 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:08:28.855178 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-14 01:08:28.855205 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:08:28.855231 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:08:28.855253 | orchestrator | 2025-04-14 01:08:28.855275 | orchestrator | 2025-04-14 01:08:28.855300 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:08:28.855320 | orchestrator | Monday 14 April 2025 01:08:24 +0000 (0:00:11.802) 0:02:02.695 ********** 2025-04-14 01:08:28.855342 | orchestrator | =============================================================================== 2025-04-14 01:08:28.855364 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.46s 2025-04-14 01:08:28.855385 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.12s 2025-04-14 01:08:28.855406 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.80s 2025-04-14 01:08:28.855427 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.42s 2025-04-14 01:08:28.855472 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.52s 2025-04-14 01:08:28.855495 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.35s 2025-04-14 01:08:28.855519 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.06s 2025-04-14 01:08:28.855571 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.78s 2025-04-14 01:08:28.855628 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.73s 2025-04-14 01:08:28.855651 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.52s 2025-04-14 01:08:28.855667 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.43s 2025-04-14 01:08:28.855681 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.40s 2025-04-14 01:08:28.855696 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.27s 2025-04-14 01:08:28.855710 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.25s 2025-04-14 01:08:28.855724 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.01s 2025-04-14 01:08:28.855738 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.94s 2025-04-14 01:08:28.855751 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.84s 2025-04-14 01:08:28.855765 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.67s 2025-04-14 01:08:28.855779 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.61s 2025-04-14 01:08:28.855793 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.40s 2025-04-14 01:08:28.855808 | 
orchestrator | 2025-04-14 01:08:25 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:28.855826 | orchestrator | 2025-04-14 01:08:25 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:28.855850 | orchestrator | 2025-04-14 01:08:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:28.855892 | orchestrator | 2025-04-14 01:08:28 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:28.856200 | orchestrator | 2025-04-14 01:08:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:28.857298 | orchestrator | 2025-04-14 01:08:28 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:28.858261 | orchestrator | 2025-04-14 01:08:28 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:28.859334 | orchestrator | 2025-04-14 01:08:28 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:31.903875 | orchestrator | 2025-04-14 01:08:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:31.903991 | orchestrator | 2025-04-14 01:08:31 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:31.906100 | orchestrator | 2025-04-14 01:08:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:31.906126 | orchestrator | 2025-04-14 01:08:31 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:31.906142 | orchestrator | 2025-04-14 01:08:31 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:31.908066 | orchestrator | 2025-04-14 01:08:31 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:31.908436 | orchestrator | 2025-04-14 01:08:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:34.959795 | orchestrator | 2025-04-14 01:08:34 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:34.963223 | orchestrator | 2025-04-14 01:08:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:34.963683 | orchestrator | 2025-04-14 01:08:34 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:34.964102 | orchestrator | 2025-04-14 01:08:34 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:34.964864 | orchestrator | 2025-04-14 01:08:34 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:38.013983 | orchestrator | 2025-04-14 01:08:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:38.014254 | orchestrator | 2025-04-14 01:08:38 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:38.014394 | orchestrator | 2025-04-14 01:08:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:38.015082 | orchestrator | 2025-04-14 01:08:38 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:38.015731 | orchestrator | 2025-04-14 01:08:38 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:38.016371 | orchestrator | 2025-04-14 01:08:38 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:41.068653 | orchestrator | 2025-04-14 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:41.068797 | 
orchestrator | 2025-04-14 01:08:41 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:41.073307 | orchestrator | 2025-04-14 01:08:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:41.076226 | orchestrator | 2025-04-14 01:08:41 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:41.076320 | orchestrator | 2025-04-14 01:08:41 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:41.079471 | orchestrator | 2025-04-14 01:08:41 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state STARTED 2025-04-14 01:08:41.079605 | orchestrator | 2025-04-14 01:08:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:44.130588 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:44.131507 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:44.132738 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:44.134179 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:44.135434 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task 1b928e62-48ac-47b1-8f5a-d244a1a32186 is in state SUCCESS 2025-04-14 01:08:44.136758 | orchestrator | 2025-04-14 01:08:44 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:08:47.187041 | orchestrator | 2025-04-14 01:08:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:47.187186 | orchestrator | 2025-04-14 01:08:47 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:47.187392 | orchestrator | 2025-04-14 01:08:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:47.187986 | orchestrator | 2025-04-14 01:08:47 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:47.188918 | orchestrator | 2025-04-14 01:08:47 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:47.191920 | orchestrator | 2025-04-14 01:08:47 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:08:50.226974 | orchestrator | 2025-04-14 01:08:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:50.227108 | orchestrator | 2025-04-14 01:08:50 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:50.227402 | orchestrator | 2025-04-14 01:08:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:50.228396 | orchestrator | 2025-04-14 01:08:50 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:50.229074 | orchestrator | 2025-04-14 01:08:50 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:50.230158 | orchestrator | 2025-04-14 01:08:50 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:08:53.264848 | orchestrator | 2025-04-14 01:08:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:53.264996 | orchestrator | 2025-04-14 01:08:53 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:53.265262 | orchestrator | 2025-04-14 01:08:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
2025-04-14 01:08:53.266184 | orchestrator | 2025-04-14 01:08:53 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:53.266972 | orchestrator | 2025-04-14 01:08:53 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:53.267790 | orchestrator | 2025-04-14 01:08:53 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:08:56.332815 | orchestrator | 2025-04-14 01:08:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:56.332949 | orchestrator | 2025-04-14 01:08:56 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:56.334252 | orchestrator | 2025-04-14 01:08:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:56.336171 | orchestrator | 2025-04-14 01:08:56 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:56.337433 | orchestrator | 2025-04-14 01:08:56 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:56.338622 | orchestrator | 2025-04-14 01:08:56 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:08:59.378152 | orchestrator | 2025-04-14 01:08:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:08:59.378304 | orchestrator | 2025-04-14 01:08:59 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:08:59.378400 | orchestrator | 2025-04-14 01:08:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:08:59.378422 | orchestrator | 2025-04-14 01:08:59 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:08:59.378442 | orchestrator | 2025-04-14 01:08:59 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:08:59.379472 | orchestrator | 2025-04-14 01:08:59 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:02.433916 | orchestrator | 2025-04-14 01:08:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:02.434124 | orchestrator | 2025-04-14 01:09:02 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:02.435121 | orchestrator | 2025-04-14 01:09:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:02.436239 | orchestrator | 2025-04-14 01:09:02 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:02.436269 | orchestrator | 2025-04-14 01:09:02 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:09:02.437203 | orchestrator | 2025-04-14 01:09:02 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:02.437350 | orchestrator | 2025-04-14 01:09:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:05.480424 | orchestrator | 2025-04-14 01:09:05 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:05.480699 | orchestrator | 2025-04-14 01:09:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:05.482764 | orchestrator | 2025-04-14 01:09:05 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:05.486254 | orchestrator | 2025-04-14 01:09:05 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state STARTED 2025-04-14 01:09:05.488834 | orchestrator | 2025-04-14 01:09:05 | INFO  | Task 
188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:08.533720 | orchestrator | 2025-04-14 01:09:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:08.533860 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:08.534581 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:08.537283 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:08.539807 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:08.542520 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task 66521145-ef0c-4bc2-af75-161822f38492 is in state SUCCESS 2025-04-14 01:09:08.542645 | orchestrator | 2025-04-14 01:09:08.542663 | orchestrator | 2025-04-14 01:09:08.542678 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:09:08.542693 | orchestrator | 2025-04-14 01:09:08.542707 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:09:08.542722 | orchestrator | Monday 14 April 2025 01:08:08 +0000 (0:00:00.327) 0:00:00.327 ********** 2025-04-14 01:09:08.542736 | orchestrator | ok: [testbed-manager] 2025-04-14 01:09:08.542752 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:09:08.542766 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:09:08.542780 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:09:08.542794 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:09:08.542808 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:09:08.542822 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:09:08.542836 | orchestrator | 2025-04-14 01:09:08.542850 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:09:08.542865 | orchestrator | Monday 14 April 2025 01:08:09 +0000 (0:00:00.939) 0:00:01.266 ********** 2025-04-14 01:09:08.542880 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542895 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542909 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542923 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542937 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542968 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542983 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-04-14 01:09:08.542997 | orchestrator | 2025-04-14 01:09:08.543011 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-04-14 01:09:08.543025 | orchestrator | 2025-04-14 01:09:08.543039 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-04-14 01:09:08.543053 | orchestrator | Monday 14 April 2025 01:08:10 +0000 (0:00:01.023) 0:00:02.290 ********** 2025-04-14 01:09:08.543068 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:09:08.543083 | orchestrator | 2025-04-14 01:09:08.543097 | orchestrator | TASK [service-ks-register : ceph-rgw 
| Creating services] ********************** 2025-04-14 01:09:08.543135 | orchestrator | Monday 14 April 2025 01:08:11 +0000 (0:00:01.551) 0:00:03.841 ********** 2025-04-14 01:09:08.543150 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-04-14 01:09:08.543164 | orchestrator | 2025-04-14 01:09:08.543178 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-04-14 01:09:08.543192 | orchestrator | Monday 14 April 2025 01:08:15 +0000 (0:00:03.800) 0:00:07.641 ********** 2025-04-14 01:09:08.543207 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-04-14 01:09:08.543222 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-04-14 01:09:08.543236 | orchestrator | 2025-04-14 01:09:08.543250 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-04-14 01:09:08.543264 | orchestrator | Monday 14 April 2025 01:08:22 +0000 (0:00:06.987) 0:00:14.629 ********** 2025-04-14 01:09:08.543282 | orchestrator | ok: [testbed-manager] => (item=service) 2025-04-14 01:09:08.543298 | orchestrator | 2025-04-14 01:09:08.543315 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-04-14 01:09:08.543330 | orchestrator | Monday 14 April 2025 01:08:25 +0000 (0:00:03.238) 0:00:17.868 ********** 2025-04-14 01:09:08.543346 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:09:08.543362 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-04-14 01:09:08.543378 | orchestrator | 2025-04-14 01:09:08.543399 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-04-14 01:09:08.543416 | orchestrator | Monday 14 April 2025 01:08:29 +0000 (0:00:04.094) 0:00:21.963 ********** 2025-04-14 01:09:08.543432 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-04-14 01:09:08.543449 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-04-14 01:09:08.543466 | orchestrator | 2025-04-14 01:09:08.543503 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-04-14 01:09:08.543521 | orchestrator | Monday 14 April 2025 01:08:36 +0000 (0:00:06.635) 0:00:28.598 ********** 2025-04-14 01:09:08.543538 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-04-14 01:09:08.543554 | orchestrator | 2025-04-14 01:09:08.543570 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:09:08.543586 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.543615 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.543631 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.543645 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.543659 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.543684 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
2025-04-14 01:09:08.547423 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:09:08.547465 | orchestrator | 2025-04-14 01:09:08.547520 | orchestrator | 2025-04-14 01:09:08.547546 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:09:08.547570 | orchestrator | Monday 14 April 2025 01:08:42 +0000 (0:00:05.460) 0:00:34.058 ********** 2025-04-14 01:09:08.547594 | orchestrator | =============================================================================== 2025-04-14 01:09:08.547638 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.99s 2025-04-14 01:09:08.547665 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.64s 2025-04-14 01:09:08.547692 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.46s 2025-04-14 01:09:08.547718 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.09s 2025-04-14 01:09:08.547744 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.80s 2025-04-14 01:09:08.547768 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.24s 2025-04-14 01:09:08.547793 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.55s 2025-04-14 01:09:08.547819 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.02s 2025-04-14 01:09:08.547845 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.94s 2025-04-14 01:09:08.547871 | orchestrator | 2025-04-14 01:09:08.547909 | orchestrator | 2025-04-14 01:09:08 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:11.594393 | orchestrator | 2025-04-14 01:09:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:11.594569 | orchestrator | 2025-04-14 01:09:11 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:11.598255 | orchestrator | 2025-04-14 01:09:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:11.600667 | orchestrator | 2025-04-14 01:09:11 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:11.604840 | orchestrator | 2025-04-14 01:09:11 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:11.608072 | orchestrator | 2025-04-14 01:09:11 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:11.609610 | orchestrator | 2025-04-14 01:09:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:14.657908 | orchestrator | 2025-04-14 01:09:14 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:14.658252 | orchestrator | 2025-04-14 01:09:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:14.659747 | orchestrator | 2025-04-14 01:09:14 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:14.661729 | orchestrator | 2025-04-14 01:09:14 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:14.662292 | orchestrator | 2025-04-14 01:09:14 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:17.707704 | orchestrator | 2025-04-14 01:09:14 | INFO  | Wait 1 second(s) 
until the next check 2025-04-14 01:09:17.707846 | orchestrator | 2025-04-14 01:09:17 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:17.710202 | orchestrator | 2025-04-14 01:09:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:17.711771 | orchestrator | 2025-04-14 01:09:17 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:17.713408 | orchestrator | 2025-04-14 01:09:17 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:17.717252 | orchestrator | 2025-04-14 01:09:17 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:20.781261 | orchestrator | 2025-04-14 01:09:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:20.781400 | orchestrator | 2025-04-14 01:09:20 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:20.782406 | orchestrator | 2025-04-14 01:09:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:20.786927 | orchestrator | 2025-04-14 01:09:20 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:20.787957 | orchestrator | 2025-04-14 01:09:20 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:20.790268 | orchestrator | 2025-04-14 01:09:20 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:23.835222 | orchestrator | 2025-04-14 01:09:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:23.835346 | orchestrator | 2025-04-14 01:09:23 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:23.838805 | orchestrator | 2025-04-14 01:09:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:23.838872 | orchestrator | 2025-04-14 01:09:23 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:23.838897 | orchestrator | 2025-04-14 01:09:23 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:23.839294 | orchestrator | 2025-04-14 01:09:23 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:26.894354 | orchestrator | 2025-04-14 01:09:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:26.894518 | orchestrator | 2025-04-14 01:09:26 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:26.894962 | orchestrator | 2025-04-14 01:09:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:26.895631 | orchestrator | 2025-04-14 01:09:26 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:26.896294 | orchestrator | 2025-04-14 01:09:26 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:26.896847 | orchestrator | 2025-04-14 01:09:26 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:26.897174 | orchestrator | 2025-04-14 01:09:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:29.940052 | orchestrator | 2025-04-14 01:09:29 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:29.942320 | orchestrator | 2025-04-14 01:09:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:29.942363 | orchestrator | 2025-04-14 01:09:29 | INFO  | Task 
71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:29.942958 | orchestrator | 2025-04-14 01:09:29 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:29.943813 | orchestrator | 2025-04-14 01:09:29 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:32.980231 | orchestrator | 2025-04-14 01:09:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:32.980393 | orchestrator | 2025-04-14 01:09:32 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:32.980771 | orchestrator | 2025-04-14 01:09:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:32.982879 | orchestrator | 2025-04-14 01:09:32 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:32.986001 | orchestrator | 2025-04-14 01:09:32 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:32.986110 | orchestrator | 2025-04-14 01:09:32 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:36.024951 | orchestrator | 2025-04-14 01:09:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:36.025116 | orchestrator | 2025-04-14 01:09:36 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:36.030985 | orchestrator | 2025-04-14 01:09:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:36.031426 | orchestrator | 2025-04-14 01:09:36 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:36.032530 | orchestrator | 2025-04-14 01:09:36 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:36.033874 | orchestrator | 2025-04-14 01:09:36 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:39.060795 | orchestrator | 2025-04-14 01:09:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:39.060936 | orchestrator | 2025-04-14 01:09:39 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:42.102815 | orchestrator | 2025-04-14 01:09:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:42.103008 | orchestrator | 2025-04-14 01:09:39 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:42.103030 | orchestrator | 2025-04-14 01:09:39 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:42.103045 | orchestrator | 2025-04-14 01:09:39 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:42.103060 | orchestrator | 2025-04-14 01:09:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:42.103093 | orchestrator | 2025-04-14 01:09:42 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:42.103169 | orchestrator | 2025-04-14 01:09:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:42.103189 | orchestrator | 2025-04-14 01:09:42 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:42.103208 | orchestrator | 2025-04-14 01:09:42 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:42.106312 | orchestrator | 2025-04-14 01:09:42 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:45.142960 | orchestrator | 2025-04-14 
01:09:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:45.143101 | orchestrator | 2025-04-14 01:09:45 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:45.143761 | orchestrator | 2025-04-14 01:09:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:45.144967 | orchestrator | 2025-04-14 01:09:45 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:45.145775 | orchestrator | 2025-04-14 01:09:45 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:45.146606 | orchestrator | 2025-04-14 01:09:45 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:48.188540 | orchestrator | 2025-04-14 01:09:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:48.188679 | orchestrator | 2025-04-14 01:09:48 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:48.191371 | orchestrator | 2025-04-14 01:09:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:48.192037 | orchestrator | 2025-04-14 01:09:48 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:48.192710 | orchestrator | 2025-04-14 01:09:48 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:48.194341 | orchestrator | 2025-04-14 01:09:48 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:51.230813 | orchestrator | 2025-04-14 01:09:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:51.230970 | orchestrator | 2025-04-14 01:09:51 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:51.231390 | orchestrator | 2025-04-14 01:09:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:51.231450 | orchestrator | 2025-04-14 01:09:51 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:51.232375 | orchestrator | 2025-04-14 01:09:51 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:51.232549 | orchestrator | 2025-04-14 01:09:51 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:54.264168 | orchestrator | 2025-04-14 01:09:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:54.264312 | orchestrator | 2025-04-14 01:09:54 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:54.264855 | orchestrator | 2025-04-14 01:09:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:54.266494 | orchestrator | 2025-04-14 01:09:54 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:54.267606 | orchestrator | 2025-04-14 01:09:54 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:54.268276 | orchestrator | 2025-04-14 01:09:54 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:09:57.322188 | orchestrator | 2025-04-14 01:09:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:09:57.322326 | orchestrator | 2025-04-14 01:09:57 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:09:57.324075 | orchestrator | 2025-04-14 01:09:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:09:57.324795 | orchestrator | 2025-04-14 
01:09:57 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:09:57.324822 | orchestrator | 2025-04-14 01:09:57 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:09:57.324841 | orchestrator | 2025-04-14 01:09:57 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:00.367878 | orchestrator | 2025-04-14 01:09:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:00.368023 | orchestrator | 2025-04-14 01:10:00 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:00.368805 | orchestrator | 2025-04-14 01:10:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:00.369690 | orchestrator | 2025-04-14 01:10:00 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:00.370738 | orchestrator | 2025-04-14 01:10:00 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:00.372246 | orchestrator | 2025-04-14 01:10:00 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:03.417796 | orchestrator | 2025-04-14 01:10:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:03.417930 | orchestrator | 2025-04-14 01:10:03 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:06.457226 | orchestrator | 2025-04-14 01:10:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:06.457392 | orchestrator | 2025-04-14 01:10:03 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:06.457511 | orchestrator | 2025-04-14 01:10:03 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:06.457538 | orchestrator | 2025-04-14 01:10:03 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:06.457564 | orchestrator | 2025-04-14 01:10:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:06.457630 | orchestrator | 2025-04-14 01:10:06 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:06.458508 | orchestrator | 2025-04-14 01:10:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:06.459328 | orchestrator | 2025-04-14 01:10:06 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:06.459371 | orchestrator | 2025-04-14 01:10:06 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:06.462991 | orchestrator | 2025-04-14 01:10:06 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:09.492101 | orchestrator | 2025-04-14 01:10:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:09.492229 | orchestrator | 2025-04-14 01:10:09 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:09.495826 | orchestrator | 2025-04-14 01:10:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:09.496307 | orchestrator | 2025-04-14 01:10:09 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:09.497194 | orchestrator | 2025-04-14 01:10:09 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:09.497732 | orchestrator | 2025-04-14 01:10:09 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:09.497944 | 
orchestrator | 2025-04-14 01:10:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:12.537808 | orchestrator | 2025-04-14 01:10:12 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:12.538679 | orchestrator | 2025-04-14 01:10:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:12.538828 | orchestrator | 2025-04-14 01:10:12 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:12.539628 | orchestrator | 2025-04-14 01:10:12 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:12.540006 | orchestrator | 2025-04-14 01:10:12 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:15.571061 | orchestrator | 2025-04-14 01:10:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:15.571256 | orchestrator | 2025-04-14 01:10:15 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:15.571350 | orchestrator | 2025-04-14 01:10:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:15.572182 | orchestrator | 2025-04-14 01:10:15 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:15.572524 | orchestrator | 2025-04-14 01:10:15 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:15.573181 | orchestrator | 2025-04-14 01:10:15 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:18.617997 | orchestrator | 2025-04-14 01:10:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:18.618194 | orchestrator | 2025-04-14 01:10:18 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:18.618289 | orchestrator | 2025-04-14 01:10:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:18.620288 | orchestrator | 2025-04-14 01:10:18 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:18.620761 | orchestrator | 2025-04-14 01:10:18 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:18.621574 | orchestrator | 2025-04-14 01:10:18 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:21.660885 | orchestrator | 2025-04-14 01:10:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:21.661045 | orchestrator | 2025-04-14 01:10:21 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:21.661674 | orchestrator | 2025-04-14 01:10:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:21.661717 | orchestrator | 2025-04-14 01:10:21 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:21.662222 | orchestrator | 2025-04-14 01:10:21 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:21.663220 | orchestrator | 2025-04-14 01:10:21 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:24.715483 | orchestrator | 2025-04-14 01:10:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:24.715643 | orchestrator | 2025-04-14 01:10:24 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:24.716285 | orchestrator | 2025-04-14 01:10:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:24.716521 | 
orchestrator | 2025-04-14 01:10:24 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:24.716616 | orchestrator | 2025-04-14 01:10:24 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:24.717413 | orchestrator | 2025-04-14 01:10:24 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:27.752963 | orchestrator | 2025-04-14 01:10:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:27.753193 | orchestrator | 2025-04-14 01:10:27 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:27.753288 | orchestrator | 2025-04-14 01:10:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:27.753728 | orchestrator | 2025-04-14 01:10:27 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:27.754291 | orchestrator | 2025-04-14 01:10:27 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:27.755646 | orchestrator | 2025-04-14 01:10:27 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:27.755767 | orchestrator | 2025-04-14 01:10:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:30.793857 | orchestrator | 2025-04-14 01:10:30 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:30.794919 | orchestrator | 2025-04-14 01:10:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:30.795103 | orchestrator | 2025-04-14 01:10:30 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:30.795950 | orchestrator | 2025-04-14 01:10:30 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:30.796652 | orchestrator | 2025-04-14 01:10:30 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:30.796736 | orchestrator | 2025-04-14 01:10:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:33.839420 | orchestrator | 2025-04-14 01:10:33 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:33.841687 | orchestrator | 2025-04-14 01:10:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:33.841750 | orchestrator | 2025-04-14 01:10:33 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:33.842346 | orchestrator | 2025-04-14 01:10:33 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:33.843168 | orchestrator | 2025-04-14 01:10:33 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:36.879699 | orchestrator | 2025-04-14 01:10:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:36.879844 | orchestrator | 2025-04-14 01:10:36 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:36.879951 | orchestrator | 2025-04-14 01:10:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:36.879978 | orchestrator | 2025-04-14 01:10:36 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:36.881037 | orchestrator | 2025-04-14 01:10:36 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:36.882148 | orchestrator | 2025-04-14 01:10:36 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 
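
The interleaved INFO lines above and below come from the OSISM wait loop that polls every submitted deployment task until it leaves the STARTED state. A minimal sketch of such a watcher, using hypothetical helper names rather than the actual osism implementation, could look like this:

import time

def wait_for_tasks(task_ids, get_state, interval=1):
    # Sketch only: get_state is a hypothetical callable returning strings such
    # as "STARTED" or "SUCCESS", mirroring the states printed in this log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

In the log, the same set of task IDs is reported once per polling cycle, which is why near-identical state lines repeat every few seconds until a task flips to SUCCESS (as 1b928e62-48ac-47b1-8f5a-d244a1a32186 and 66521145-ef0c-4bc2-af75-161822f38492 did earlier).
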
2025-04-14 01:10:36.882284 | orchestrator | 2025-04-14 01:10:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:39.928284 | orchestrator | 2025-04-14 01:10:39 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:39.928819 | orchestrator | 2025-04-14 01:10:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:39.928845 | orchestrator | 2025-04-14 01:10:39 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:39.930843 | orchestrator | 2025-04-14 01:10:39 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:39.931609 | orchestrator | 2025-04-14 01:10:39 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:39.931685 | orchestrator | 2025-04-14 01:10:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:42.986518 | orchestrator | 2025-04-14 01:10:42 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:42.991785 | orchestrator | 2025-04-14 01:10:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:42.996802 | orchestrator | 2025-04-14 01:10:42 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:43.002520 | orchestrator | 2025-04-14 01:10:43 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:43.005818 | orchestrator | 2025-04-14 01:10:43 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:46.067182 | orchestrator | 2025-04-14 01:10:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:46.067325 | orchestrator | 2025-04-14 01:10:46 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:46.067550 | orchestrator | 2025-04-14 01:10:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:46.067584 | orchestrator | 2025-04-14 01:10:46 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:46.068972 | orchestrator | 2025-04-14 01:10:46 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:46.069798 | orchestrator | 2025-04-14 01:10:46 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:46.069918 | orchestrator | 2025-04-14 01:10:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:49.133088 | orchestrator | 2025-04-14 01:10:49 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:49.134251 | orchestrator | 2025-04-14 01:10:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:49.135476 | orchestrator | 2025-04-14 01:10:49 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:49.140425 | orchestrator | 2025-04-14 01:10:49 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:49.140566 | orchestrator | 2025-04-14 01:10:49 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:49.140615 | orchestrator | 2025-04-14 01:10:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:52.192630 | orchestrator | 2025-04-14 01:10:52 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:52.193407 | orchestrator | 2025-04-14 01:10:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
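
Once the ceph-rgw play above has registered the swift (object-store) service and its internal and public endpoints in Keystone, the result can also be inspected outside the playbooks. A small openstacksdk sketch, assuming admin credentials for the testbed cloud are available (the clouds.yaml entry name "testbed" is an assumption), would be:

import openstack

# Sketch only: the cloud name "testbed" is an assumption; use whatever
# clouds.yaml entry points at the Keystone this deployment registered against.
conn = openstack.connect(cloud="testbed")

for service in conn.identity.services():
    if service.type == "object-store":
        print(f"service: {service.name} ({service.id})")
        # List the endpoints created by service-ks-register, e.g. the
        # https://api[-int].testbed.osism.xyz:6780/swift/v1/... URLs above.
        for endpoint in conn.identity.endpoints():
            if endpoint.service_id == service.id:
                print(f"  {endpoint.interface}: {endpoint.url}")
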
2025-04-14 01:10:52.196420 | orchestrator | 2025-04-14 01:10:52 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:52.197333 | orchestrator | 2025-04-14 01:10:52 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:52.197425 | orchestrator | 2025-04-14 01:10:52 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:55.234331 | orchestrator | 2025-04-14 01:10:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:55.234501 | orchestrator | 2025-04-14 01:10:55 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:55.236567 | orchestrator | 2025-04-14 01:10:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:55.237183 | orchestrator | 2025-04-14 01:10:55 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:55.239267 | orchestrator | 2025-04-14 01:10:55 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:55.240153 | orchestrator | 2025-04-14 01:10:55 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:55.240467 | orchestrator | 2025-04-14 01:10:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:10:58.276182 | orchestrator | 2025-04-14 01:10:58 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:10:58.277929 | orchestrator | 2025-04-14 01:10:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:10:58.279808 | orchestrator | 2025-04-14 01:10:58 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:10:58.282081 | orchestrator | 2025-04-14 01:10:58 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:10:58.284147 | orchestrator | 2025-04-14 01:10:58 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:10:58.284534 | orchestrator | 2025-04-14 01:10:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:01.338739 | orchestrator | 2025-04-14 01:11:01 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:01.339304 | orchestrator | 2025-04-14 01:11:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:01.339454 | orchestrator | 2025-04-14 01:11:01 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:01.340199 | orchestrator | 2025-04-14 01:11:01 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:01.340851 | orchestrator | 2025-04-14 01:11:01 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:04.388091 | orchestrator | 2025-04-14 01:11:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:04.388184 | orchestrator | 2025-04-14 01:11:04 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:04.389708 | orchestrator | 2025-04-14 01:11:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:04.392301 | orchestrator | 2025-04-14 01:11:04 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:04.395611 | orchestrator | 2025-04-14 01:11:04 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:04.397923 | orchestrator | 2025-04-14 01:11:04 | INFO  | Task 
188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:07.461975 | orchestrator | 2025-04-14 01:11:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:07.462213 | orchestrator | 2025-04-14 01:11:07 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:07.462972 | orchestrator | 2025-04-14 01:11:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:07.464389 | orchestrator | 2025-04-14 01:11:07 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:07.465251 | orchestrator | 2025-04-14 01:11:07 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:07.466932 | orchestrator | 2025-04-14 01:11:07 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:10.523194 | orchestrator | 2025-04-14 01:11:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:10.523389 | orchestrator | 2025-04-14 01:11:10 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:10.525253 | orchestrator | 2025-04-14 01:11:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:10.528149 | orchestrator | 2025-04-14 01:11:10 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:10.529200 | orchestrator | 2025-04-14 01:11:10 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:10.529247 | orchestrator | 2025-04-14 01:11:10 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:10.529546 | orchestrator | 2025-04-14 01:11:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:13.581865 | orchestrator | 2025-04-14 01:11:13 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:13.585480 | orchestrator | 2025-04-14 01:11:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:13.587564 | orchestrator | 2025-04-14 01:11:13 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:13.589950 | orchestrator | 2025-04-14 01:11:13 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:13.591829 | orchestrator | 2025-04-14 01:11:13 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:13.591955 | orchestrator | 2025-04-14 01:11:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:16.643248 | orchestrator | 2025-04-14 01:11:16 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:16.644070 | orchestrator | 2025-04-14 01:11:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:16.644900 | orchestrator | 2025-04-14 01:11:16 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:16.646112 | orchestrator | 2025-04-14 01:11:16 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:16.646358 | orchestrator | 2025-04-14 01:11:16 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:16.646481 | orchestrator | 2025-04-14 01:11:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:19.695147 | orchestrator | 2025-04-14 01:11:19 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:19.696543 | orchestrator | 2025-04-14 01:11:19 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:19.697275 | orchestrator | 2025-04-14 01:11:19 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:19.698532 | orchestrator | 2025-04-14 01:11:19 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:19.699428 | orchestrator | 2025-04-14 01:11:19 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:22.748866 | orchestrator | 2025-04-14 01:11:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:22.749004 | orchestrator | 2025-04-14 01:11:22 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:22.750553 | orchestrator | 2025-04-14 01:11:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:22.753478 | orchestrator | 2025-04-14 01:11:22 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:22.755702 | orchestrator | 2025-04-14 01:11:22 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:22.757922 | orchestrator | 2025-04-14 01:11:22 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:25.812627 | orchestrator | 2025-04-14 01:11:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:25.812766 | orchestrator | 2025-04-14 01:11:25 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state STARTED 2025-04-14 01:11:25.813068 | orchestrator | 2025-04-14 01:11:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:25.815830 | orchestrator | 2025-04-14 01:11:25 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:25.816873 | orchestrator | 2025-04-14 01:11:25 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:25.818740 | orchestrator | 2025-04-14 01:11:25 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:25.818819 | orchestrator | 2025-04-14 01:11:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:28.872077 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task ce30e165-8d29-416d-8b9e-293fa77d28fc is in state SUCCESS 2025-04-14 01:11:28.873897 | orchestrator | 2025-04-14 01:11:28.873986 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-04-14 01:11:28.874007 | orchestrator | 2025-04-14 01:11:28.874073 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-04-14 01:11:28.874097 | orchestrator | Monday 14 April 2025 01:03:01 +0000 (0:00:00.276) 0:00:00.276 ********** 2025-04-14 01:11:28.874149 | orchestrator | changed: [localhost] 2025-04-14 01:11:28.874471 | orchestrator | 2025-04-14 01:11:28.874512 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-04-14 01:11:28.874572 | orchestrator | Monday 14 April 2025 01:03:02 +0000 (0:00:00.573) 0:00:00.849 ********** 2025-04-14 01:11:28.874599 | orchestrator | 2025-04-14 01:11:28.874626 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874652 | orchestrator | 2025-04-14 01:11:28.874672 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874688 | orchestrator | 2025-04-14 01:11:28.874704 | orchestrator | STILL ALIVE [task 
'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874720 | orchestrator | 2025-04-14 01:11:28.874736 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874751 | orchestrator | 2025-04-14 01:11:28.874765 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874779 | orchestrator | 2025-04-14 01:11:28.874793 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874807 | orchestrator | 2025-04-14 01:11:28.874821 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-04-14 01:11:28.874835 | orchestrator | changed: [localhost] 2025-04-14 01:11:28.874849 | orchestrator | 2025-04-14 01:11:28.874863 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-04-14 01:11:28.874877 | orchestrator | Monday 14 April 2025 01:08:50 +0000 (0:05:48.387) 0:05:49.237 ********** 2025-04-14 01:11:28.874891 | orchestrator | changed: [localhost] 2025-04-14 01:11:28.874905 | orchestrator | 2025-04-14 01:11:28.874934 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:11:28.874949 | orchestrator | 2025-04-14 01:11:28.874970 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:11:28.874992 | orchestrator | Monday 14 April 2025 01:09:03 +0000 (0:00:13.015) 0:06:02.252 ********** 2025-04-14 01:11:28.875016 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:11:28.875038 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:11:28.875061 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:11:28.875085 | orchestrator | 2025-04-14 01:11:28.875109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:11:28.875131 | orchestrator | Monday 14 April 2025 01:09:04 +0000 (0:00:00.471) 0:06:02.724 ********** 2025-04-14 01:11:28.875157 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-04-14 01:11:28.875174 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-04-14 01:11:28.875188 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-04-14 01:11:28.875202 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-04-14 01:11:28.875245 | orchestrator | 2025-04-14 01:11:28.875260 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-04-14 01:11:28.875274 | orchestrator | skipping: no hosts matched 2025-04-14 01:11:28.875297 | orchestrator | 2025-04-14 01:11:28.875345 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:11:28.875361 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:11:28.875378 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:11:28.875394 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:11:28.875409 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:11:28.875423 | orchestrator | 2025-04-14 01:11:28.875437 | orchestrator | 2025-04-14 01:11:28.875451 | orchestrator 
| TASKS RECAP ******************************************************************** 2025-04-14 01:11:28.875478 | orchestrator | Monday 14 April 2025 01:09:05 +0000 (0:00:00.700) 0:06:03.425 ********** 2025-04-14 01:11:28.875493 | orchestrator | =============================================================================== 2025-04-14 01:11:28.875506 | orchestrator | Download ironic-agent initramfs --------------------------------------- 348.39s 2025-04-14 01:11:28.875520 | orchestrator | Download ironic-agent kernel ------------------------------------------- 13.02s 2025-04-14 01:11:28.875772 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-04-14 01:11:28.875805 | orchestrator | Ensure the destination directory exists --------------------------------- 0.57s 2025-04-14 01:11:28.875832 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2025-04-14 01:11:28.875859 | orchestrator | 2025-04-14 01:11:28.875882 | orchestrator | 2025-04-14 01:11:28.875897 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:11:28.875910 | orchestrator | 2025-04-14 01:11:28.875924 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:11:28.875938 | orchestrator | Monday 14 April 2025 01:06:37 +0000 (0:00:00.347) 0:00:00.347 ********** 2025-04-14 01:11:28.875952 | orchestrator | ok: [testbed-manager] 2025-04-14 01:11:28.875967 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:11:28.875981 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:11:28.875995 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:11:28.876009 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:11:28.876023 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:11:28.876037 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:11:28.876051 | orchestrator | 2025-04-14 01:11:28.876065 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:11:28.876079 | orchestrator | Monday 14 April 2025 01:06:38 +0000 (0:00:00.920) 0:00:01.268 ********** 2025-04-14 01:11:28.876111 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876129 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876161 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876175 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876190 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876205 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876219 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-04-14 01:11:28.876233 | orchestrator | 2025-04-14 01:11:28.876247 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-04-14 01:11:28.876261 | orchestrator | 2025-04-14 01:11:28.876275 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-04-14 01:11:28.876336 | orchestrator | Monday 14 April 2025 01:06:39 +0000 (0:00:01.000) 0:00:02.268 ********** 2025-04-14 01:11:28.876352 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 
01:11:28.876369 | orchestrator | 2025-04-14 01:11:28.876567 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-04-14 01:11:28.876598 | orchestrator | Monday 14 April 2025 01:06:41 +0000 (0:00:01.603) 0:00:03.872 ********** 2025-04-14 01:11:28.876667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.876875 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 01:11:28.876901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.876916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.876952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.876968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.876983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.877054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.877110 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.877132 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.877162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.877185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.877237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877276 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.877292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.877408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.877427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.877637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.877656 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.877679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.877709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.877725 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.877754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.877793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.879263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.879279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.879398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879507 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 01:11:28.879522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.879562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.879616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.879631 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.879685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.879716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.879737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.879762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.879780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.879792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.879804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.879827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.879863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.879892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.879916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.879939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.879984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.879996 | orchestrator | 2025-04-14 01:11:28.880008 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-04-14 01:11:28.880020 | orchestrator | Monday 14 April 2025 01:06:45 +0000 (0:00:03.922) 0:00:07.794 ********** 2025-04-14 01:11:28.880031 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:11:28.880043 | orchestrator | 2025-04-14 01:11:28.880054 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-04-14 01:11:28.880065 | orchestrator | Monday 14 April 2025 01:06:47 +0000 (0:00:01.762) 0:00:09.557 ********** 2025-04-14 01:11:28.880077 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 01:11:28.880089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.880167 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2025-04-14 01:11:28.880177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880249 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880268 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880385 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 01:11:28.880397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.880430 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.880483 | orchestrator | 2025-04-14 01:11:28.880494 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-04-14 01:11:28.880504 | orchestrator | Monday 14 April 2025 01:06:52 +0000 (0:00:05.808) 0:00:15.366 ********** 2025-04-14 01:11:28.880523 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.880534 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880545 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880561 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.880572 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880635 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.880646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880675 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.880686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880752 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.880763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.880825 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.880839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880880 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.880891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.880927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.880976 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.880986 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.880997 | orchestrator | 2025-04-14 01:11:28.881007 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-04-14 01:11:28.881018 | orchestrator | Monday 14 April 2025 01:06:55 +0000 (0:00:02.888) 0:00:18.255 ********** 2025-04-14 01:11:28.881028 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881044 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881055 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881066 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.881090 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881128 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881160 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.881170 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.881181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881379 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.881395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881515 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.881533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881572 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.881587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881598 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881629 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.881640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-04-14 01:11:28.881656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.881683 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.881693 | orchestrator | 2025-04-14 01:11:28.881703 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-04-14 01:11:28.881714 | orchestrator | Monday 14 April 2025 01:06:59 +0000 (0:00:03.927) 0:00:22.182 ********** 2025-04-14 01:11:28.881724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.881820 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 01:11:28.881831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.881967 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.881983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.882000 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 
01:11:28.882158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882416 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882537 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882568 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 01:11:28.882577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882611 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 
'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.882750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.882764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.882773 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.882791 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.882842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.882878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.882916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.882946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 
'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.882956 | orchestrator | 2025-04-14 01:11:28.882970 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-04-14 01:11:28.882984 | orchestrator | Monday 14 April 2025 01:07:07 +0000 (0:00:07.548) 0:00:29.730 ********** 2025-04-14 01:11:28.882999 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 01:11:28.883012 | orchestrator | 2025-04-14 01:11:28.883026 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-04-14 01:11:28.883039 | orchestrator | Monday 14 April 2025 01:07:07 +0000 (0:00:00.614) 0:00:30.345 ********** 2025-04-14 01:11:28.883053 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883067 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883089 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883106 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883132 | orchestrator | 
skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883149 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883158 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883167 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883176 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883190 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1067069, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.883206 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883216 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883238 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883247 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883257 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883271 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883290 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883300 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883324 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883339 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883349 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 
'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883365 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883380 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883389 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883398 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883407 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883420 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1067075, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.883430 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883470 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883479 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883488 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883501 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883510 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883531 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883541 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883551 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883561 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883569 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883582 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883591 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883611 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1067070, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.381391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.883621 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883630 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883639 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883648 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883662 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883692 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883702 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883711 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883720 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883729 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883742 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883763 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883773 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883782 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883791 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883800 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883808 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1067074, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3833911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 
01:11:28.883822 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883891 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.883910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883919 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883928 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.883937 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883946 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.883955 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883963 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.883972 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883981 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.883997 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.884019 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-04-14 01:11:28.884029 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.884038 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1067098, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884047 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1067079, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884056 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1067073, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884066 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1067076, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.384391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884075 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1067096, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3913913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884084 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1067071, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.382391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884107 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1067083, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3883913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-04-14 01:11:28.884117 | orchestrator | 2025-04-14 01:11:28.884126 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-04-14 01:11:28.884135 | orchestrator | Monday 14 April 2025 01:07:51 +0000 (0:00:44.083) 0:01:14.429 ********** 2025-04-14 01:11:28.884143 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 01:11:28.884152 | orchestrator | 2025-04-14 01:11:28.884161 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-04-14 01:11:28.884169 | orchestrator | Monday 14 April 2025 01:07:52 +0000 (0:00:00.418) 0:01:14.847 ********** 2025-04-14 01:11:28.884178 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884196 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884204 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884213 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884221 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 01:11:28.884230 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884239 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884247 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884264 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884273 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:11:28.884282 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884299 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884331 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884340 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-04-14 01:11:28.884349 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884358 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884366 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884375 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884384 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884392 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884401 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884409 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884431 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884440 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884448 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884457 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884466 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884474 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884482 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.884491 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884500 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-04-14 01:11:28.884508 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-04-14 01:11:28.884516 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-04-14 01:11:28.884525 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-04-14 01:11:28.884534 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 01:11:28.884542 | orchestrator | ok: [testbed-node-4 -> localhost] 
2025-04-14 01:11:28.884551 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-14 01:11:28.884560 | orchestrator | 2025-04-14 01:11:28.884568 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-04-14 01:11:28.884577 | orchestrator | Monday 14 April 2025 01:07:53 +0000 (0:00:01.434) 0:01:16.282 ********** 2025-04-14 01:11:28.884585 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884594 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884603 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.884611 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.884620 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884628 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.884640 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884650 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.884658 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884667 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.884676 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-04-14 01:11:28.884684 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.884693 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-04-14 01:11:28.884702 | orchestrator | 2025-04-14 01:11:28.884710 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-04-14 01:11:28.884719 | orchestrator | Monday 14 April 2025 01:08:14 +0000 (0:00:20.418) 0:01:36.700 ********** 2025-04-14 01:11:28.884728 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884736 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.884745 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884754 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.884762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884771 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.884780 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884789 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.884797 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884810 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.884819 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-04-14 01:11:28.884827 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.884836 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-04-14 01:11:28.884845 | orchestrator | 2025-04-14 01:11:28.884854 | orchestrator | TASK [prometheus : Copying over 
prometheus alertmanager config file] *********** 2025-04-14 01:11:28.884862 | orchestrator | Monday 14 April 2025 01:08:22 +0000 (0:00:07.914) 0:01:44.614 ********** 2025-04-14 01:11:28.884871 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884880 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.884889 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884897 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.884906 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884915 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.884924 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884933 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.884941 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884950 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.884959 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-04-14 01:11:28.884967 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.884979 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-04-14 01:11:28.884994 | orchestrator | 2025-04-14 01:11:28.885012 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-04-14 01:11:28.885026 | orchestrator | Monday 14 April 2025 01:08:26 +0000 (0:00:04.587) 0:01:49.202 ********** 2025-04-14 01:11:28.885039 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 01:11:28.885053 | orchestrator | 2025-04-14 01:11:28.885066 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-04-14 01:11:28.885080 | orchestrator | Monday 14 April 2025 01:08:27 +0000 (0:00:00.636) 0:01:49.839 ********** 2025-04-14 01:11:28.885095 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885108 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885122 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885131 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885140 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885148 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885157 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.885165 | orchestrator | 2025-04-14 01:11:28.885174 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-04-14 01:11:28.885182 | orchestrator | Monday 14 April 2025 01:08:28 +0000 (0:00:00.836) 0:01:50.675 ********** 2025-04-14 01:11:28.885191 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885199 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885208 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885216 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
01:11:28.885224 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.885233 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.885241 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.885250 | orchestrator | 2025-04-14 01:11:28.885262 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-04-14 01:11:28.885277 | orchestrator | Monday 14 April 2025 01:08:32 +0000 (0:00:04.671) 0:01:55.346 ********** 2025-04-14 01:11:28.885286 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885294 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885307 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885335 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885349 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885358 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885367 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885376 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885385 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885394 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885403 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885411 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.885420 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-04-14 01:11:28.885428 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885436 | orchestrator | 2025-04-14 01:11:28.885445 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-04-14 01:11:28.885453 | orchestrator | Monday 14 April 2025 01:08:36 +0000 (0:00:03.564) 0:01:58.911 ********** 2025-04-14 01:11:28.885462 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885470 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885479 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885488 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885496 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885505 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885513 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885522 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885530 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885539 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885547 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-04-14 01:11:28.885556 | orchestrator | skipping: [testbed-node-5] 2025-04-14 
01:11:28.885564 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-04-14 01:11:28.885573 | orchestrator | 2025-04-14 01:11:28.885581 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-04-14 01:11:28.885590 | orchestrator | Monday 14 April 2025 01:08:40 +0000 (0:00:04.360) 0:02:03.272 ********** 2025-04-14 01:11:28.885598 | orchestrator | [WARNING]: Skipped 2025-04-14 01:11:28.885607 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-04-14 01:11:28.885616 | orchestrator | due to this access issue: 2025-04-14 01:11:28.885624 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-04-14 01:11:28.885632 | orchestrator | not a directory 2025-04-14 01:11:28.885641 | orchestrator | ok: [testbed-manager -> localhost] 2025-04-14 01:11:28.885649 | orchestrator | 2025-04-14 01:11:28.885658 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-04-14 01:11:28.885671 | orchestrator | Monday 14 April 2025 01:08:43 +0000 (0:00:02.213) 0:02:05.486 ********** 2025-04-14 01:11:28.885680 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885688 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885697 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885705 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885714 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885722 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885730 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.885739 | orchestrator | 2025-04-14 01:11:28.885747 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-04-14 01:11:28.885756 | orchestrator | Monday 14 April 2025 01:08:44 +0000 (0:00:01.160) 0:02:06.646 ********** 2025-04-14 01:11:28.885764 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885773 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885781 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885790 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885798 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885806 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885815 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.885823 | orchestrator | 2025-04-14 01:11:28.885832 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-04-14 01:11:28.885840 | orchestrator | Monday 14 April 2025 01:08:45 +0000 (0:00:00.951) 0:02:07.597 ********** 2025-04-14 01:11:28.885849 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885857 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.885866 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885878 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.885887 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885896 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.885904 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885913 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.885921 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885930 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.885938 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885947 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.885955 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-04-14 01:11:28.885964 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.885972 | orchestrator | 2025-04-14 01:11:28.885981 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-04-14 01:11:28.885989 | orchestrator | Monday 14 April 2025 01:08:49 +0000 (0:00:03.976) 0:02:11.574 ********** 2025-04-14 01:11:28.885998 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886006 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:11:28.886037 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886048 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:11:28.886057 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886066 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:11:28.886075 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886083 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:11:28.886096 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886105 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886114 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:11:28.886122 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:11:28.886131 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-04-14 01:11:28.886140 | orchestrator | skipping: [testbed-manager] 2025-04-14 01:11:28.886148 | orchestrator | 2025-04-14 01:11:28.886157 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-04-14 01:11:28.886166 | orchestrator | Monday 14 April 2025 01:08:53 +0000 (0:00:03.951) 0:02:15.525 ********** 2025-04-14 01:11:28.886176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886232 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-04-14 01:11:28.886241 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-04-14 01:11:28.886289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886349 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886379 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-04-14 01:11:28.886445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886545 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886638 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': 
{'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886708 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886750 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-04-14 01:11:28.886768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-04-14 01:11:28.886902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-04-14 01:11:28.886911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-04-14 01:11:28.886920 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886938 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.886948 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.886973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.886983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.886992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.887001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.887009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.887031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.887047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.887068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-04-14 01:11:28.887087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.887102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-04-14 01:11:28.887117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'index.docker.io/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-04-14 01:11:28.887132 | orchestrator | 2025-04-14 01:11:28.887146 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-04-14 01:11:28.887160 | orchestrator | Monday 14 April 2025 01:08:58 +0000 (0:00:05.136) 0:02:20.661 ********** 2025-04-14 01:11:28.887173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-04-14 01:11:28.887182 | orchestrator | 2025-04-14 01:11:28.887191 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887200 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:03.359) 0:02:24.021 ********** 2025-04-14 01:11:28.887208 | orchestrator | 2025-04-14 01:11:28.887217 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887237 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:00.061) 0:02:24.083 ********** 2025-04-14 01:11:28.887245 | orchestrator | 2025-04-14 01:11:28.887254 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887262 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:00.242) 0:02:24.325 ********** 2025-04-14 01:11:28.887271 | orchestrator | 2025-04-14 01:11:28.887284 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887293 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:00.058) 0:02:24.384 ********** 2025-04-14 01:11:28.887301 | orchestrator | 2025-04-14 01:11:28.887346 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887357 | orchestrator | Monday 14 April 2025 01:09:02 +0000 (0:00:00.056) 0:02:24.441 ********** 2025-04-14 01:11:28.887366 | orchestrator | 2025-04-14 01:11:28.887374 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887383 | orchestrator | Monday 14 April 2025 01:09:02 +0000 (0:00:00.055) 0:02:24.496 ********** 2025-04-14 01:11:28.887397 | orchestrator | 2025-04-14 01:11:28.887406 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-04-14 01:11:28.887414 | orchestrator | Monday 14 April 2025 01:09:02 +0000 (0:00:00.274) 0:02:24.771 ********** 2025-04-14 01:11:28.887423 | orchestrator | 2025-04-14 01:11:28.887431 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-04-14 01:11:28.887440 | orchestrator | Monday 14 April 2025 01:09:02 +0000 (0:00:00.156) 0:02:24.927 ********** 2025-04-14 01:11:28.887448 | orchestrator | changed: [testbed-manager] 2025-04-14 01:11:28.887457 | orchestrator | 2025-04-14 01:11:28.887466 | orchestrator | RUNNING HANDLER [prometheus : 
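Reading aid for the long "Check prometheus containers" loop above: each item is one service definition (container_name, image, volumes, optional haproxy settings), and a host only reports "changed" for a service that is enabled and scheduled on that host's group; everything else shows up as "skipping" (for example, prometheus-openstack-exporter has enabled: False and is therefore skipped everywhere). The Python sketch below only illustrates that filtering pattern with a trimmed-down services map copied from the items above; it is not the kolla-ansible role itself.

# Illustrative only: mirrors why some items above are "changed" and others "skipping".
SERVICES = {
    "prometheus-server": {
        "enabled": True,
        "group": "prometheus",
        "image": "registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206",
    },
    "prometheus-node-exporter": {
        "enabled": True,
        "group": "prometheus-node-exporter",
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206",
    },
    "prometheus-openstack-exporter": {
        "enabled": False,
        "group": "prometheus-openstack-exporter",
        "image": "registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206",
    },
}

def services_for_host(host_groups: set[str], services: dict) -> dict:
    """Keep only services that are enabled and whose group matches one of the host's groups."""
    return {
        name: svc
        for name, svc in services.items()
        if svc["enabled"] and svc["group"] in host_groups
    }

if __name__ == "__main__":
    # Hypothetical group membership for the manager node in this testbed.
    manager_groups = {"prometheus", "prometheus-node-exporter", "prometheus-alertmanager"}
    for name in services_for_host(manager_groups, SERVICES):
        print(f"would check container for {name}")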
Restart prometheus-node-exporter container] ****** 2025-04-14 01:11:28.887474 | orchestrator | Monday 14 April 2025 01:09:27 +0000 (0:00:24.682) 0:02:49.609 ********** 2025-04-14 01:11:28.887483 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.887491 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:11:28.887500 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.887509 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:11:28.887517 | orchestrator | changed: [testbed-manager] 2025-04-14 01:11:28.887526 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.887534 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:11:28.887543 | orchestrator | 2025-04-14 01:11:28.887552 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-04-14 01:11:28.887560 | orchestrator | Monday 14 April 2025 01:09:46 +0000 (0:00:19.403) 0:03:09.012 ********** 2025-04-14 01:11:28.887569 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.887581 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.887590 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.887599 | orchestrator | 2025-04-14 01:11:28.887607 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-04-14 01:11:28.887616 | orchestrator | Monday 14 April 2025 01:10:00 +0000 (0:00:14.368) 0:03:23.381 ********** 2025-04-14 01:11:28.887625 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.887633 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.887642 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.887650 | orchestrator | 2025-04-14 01:11:28.887659 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-04-14 01:11:28.887667 | orchestrator | Monday 14 April 2025 01:10:17 +0000 (0:00:16.120) 0:03:39.502 ********** 2025-04-14 01:11:28.887676 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.887684 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.887698 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.887707 | orchestrator | changed: [testbed-manager] 2025-04-14 01:11:28.887716 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:11:28.887724 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:11:28.887733 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:11:28.887742 | orchestrator | 2025-04-14 01:11:28.887750 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-04-14 01:11:28.887759 | orchestrator | Monday 14 April 2025 01:10:39 +0000 (0:00:22.009) 0:04:01.512 ********** 2025-04-14 01:11:28.887768 | orchestrator | changed: [testbed-manager] 2025-04-14 01:11:28.887776 | orchestrator | 2025-04-14 01:11:28.887785 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-04-14 01:11:28.887794 | orchestrator | Monday 14 April 2025 01:10:50 +0000 (0:00:11.044) 0:04:12.556 ********** 2025-04-14 01:11:28.887802 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:11:28.887811 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:11:28.887819 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:11:28.887828 | orchestrator | 2025-04-14 01:11:28.887837 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-04-14 01:11:28.887845 | orchestrator | Monday 14 April 2025 01:11:04 +0000 (0:00:14.601) 
0:04:27.158 ********** 2025-04-14 01:11:28.887854 | orchestrator | changed: [testbed-manager] 2025-04-14 01:11:28.887862 | orchestrator | 2025-04-14 01:11:28.887871 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-04-14 01:11:28.887883 | orchestrator | Monday 14 April 2025 01:11:14 +0000 (0:00:09.384) 0:04:36.542 ********** 2025-04-14 01:11:28.887891 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:11:28.887899 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:11:28.887907 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:11:28.887915 | orchestrator | 2025-04-14 01:11:28.887923 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:11:28.887931 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-04-14 01:11:28.887940 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-14 01:11:28.887949 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-14 01:11:28.887957 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-04-14 01:11:28.887965 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-14 01:11:28.887973 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-14 01:11:28.887981 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-04-14 01:11:28.887989 | orchestrator | 2025-04-14 01:11:28.887997 | orchestrator | 2025-04-14 01:11:28.888005 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:11:28.888013 | orchestrator | Monday 14 April 2025 01:11:26 +0000 (0:00:12.800) 0:04:49.342 ********** 2025-04-14 01:11:28.888021 | orchestrator | =============================================================================== 2025-04-14 01:11:28.888029 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 44.08s 2025-04-14 01:11:28.888040 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 24.68s 2025-04-14 01:11:28.888048 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 22.01s 2025-04-14 01:11:28.888056 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.42s 2025-04-14 01:11:28.888064 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 19.40s 2025-04-14 01:11:28.888072 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 16.12s 2025-04-14 01:11:28.888080 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 14.60s 2025-04-14 01:11:28.888088 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 14.37s 2025-04-14 01:11:28.888096 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.80s 2025-04-14 01:11:28.888104 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.04s 2025-04-14 01:11:28.888112 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.38s 2025-04-14 01:11:28.888120 | 
orchestrator | prometheus : Copying over prometheus web config file -------------------- 7.91s 2025-04-14 01:11:28.888128 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.55s 2025-04-14 01:11:28.888136 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.81s 2025-04-14 01:11:28.888144 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.14s 2025-04-14 01:11:28.888152 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.67s 2025-04-14 01:11:28.888160 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.59s 2025-04-14 01:11:28.888172 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.36s 2025-04-14 01:11:28.888183 | orchestrator | prometheus : Copying over prometheus msteams config file ---------------- 3.98s 2025-04-14 01:11:31.921838 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.95s 2025-04-14 01:11:31.921956 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:31.921973 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:31.921985 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:31.921997 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:31.922009 | orchestrator | 2025-04-14 01:11:28 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:31.922072 | orchestrator | 2025-04-14 01:11:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:31.922100 | orchestrator | 2025-04-14 01:11:31 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:31.922674 | orchestrator | 2025-04-14 01:11:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:31.923396 | orchestrator | 2025-04-14 01:11:31 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:31.924403 | orchestrator | 2025-04-14 01:11:31 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:31.925738 | orchestrator | 2025-04-14 01:11:31 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:34.979714 | orchestrator | 2025-04-14 01:11:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:34.979807 | orchestrator | 2025-04-14 01:11:34 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:34.981490 | orchestrator | 2025-04-14 01:11:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:34.983022 | orchestrator | 2025-04-14 01:11:34 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:34.984404 | orchestrator | 2025-04-14 01:11:34 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:34.985818 | orchestrator | 2025-04-14 01:11:34 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:38.039193 | orchestrator | 2025-04-14 01:11:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:38.039409 | orchestrator | 2025-04-14 01:11:38 | INFO  | Task 
b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:38.040579 | orchestrator | 2025-04-14 01:11:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:38.041949 | orchestrator | 2025-04-14 01:11:38 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:38.045427 | orchestrator | 2025-04-14 01:11:38 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:38.046201 | orchestrator | 2025-04-14 01:11:38 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:41.096469 | orchestrator | 2025-04-14 01:11:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:41.096622 | orchestrator | 2025-04-14 01:11:41 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:41.097790 | orchestrator | 2025-04-14 01:11:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:41.098169 | orchestrator | 2025-04-14 01:11:41 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:41.099099 | orchestrator | 2025-04-14 01:11:41 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:41.100261 | orchestrator | 2025-04-14 01:11:41 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:44.154498 | orchestrator | 2025-04-14 01:11:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:44.154759 | orchestrator | 2025-04-14 01:11:44 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:44.154909 | orchestrator | 2025-04-14 01:11:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:44.155524 | orchestrator | 2025-04-14 01:11:44 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:44.156402 | orchestrator | 2025-04-14 01:11:44 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:44.159064 | orchestrator | 2025-04-14 01:11:44 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:47.203187 | orchestrator | 2025-04-14 01:11:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:47.203367 | orchestrator | 2025-04-14 01:11:47 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:47.203910 | orchestrator | 2025-04-14 01:11:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:47.205158 | orchestrator | 2025-04-14 01:11:47 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:47.206489 | orchestrator | 2025-04-14 01:11:47 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:47.207145 | orchestrator | 2025-04-14 01:11:47 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:50.250344 | orchestrator | 2025-04-14 01:11:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:50.250539 | orchestrator | 2025-04-14 01:11:50 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:50.250804 | orchestrator | 2025-04-14 01:11:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:50.250938 | orchestrator | 2025-04-14 01:11:50 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:50.251567 | orchestrator | 2025-04-14 
01:11:50 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:50.252350 | orchestrator | 2025-04-14 01:11:50 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:53.300561 | orchestrator | 2025-04-14 01:11:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:53.300698 | orchestrator | 2025-04-14 01:11:53 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:53.301156 | orchestrator | 2025-04-14 01:11:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:53.301194 | orchestrator | 2025-04-14 01:11:53 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:53.303059 | orchestrator | 2025-04-14 01:11:53 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:53.304257 | orchestrator | 2025-04-14 01:11:53 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:56.345913 | orchestrator | 2025-04-14 01:11:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:56.346135 | orchestrator | 2025-04-14 01:11:56 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:56.346948 | orchestrator | 2025-04-14 01:11:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:56.348536 | orchestrator | 2025-04-14 01:11:56 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:56.349311 | orchestrator | 2025-04-14 01:11:56 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:56.349343 | orchestrator | 2025-04-14 01:11:56 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:11:59.393227 | orchestrator | 2025-04-14 01:11:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:11:59.393426 | orchestrator | 2025-04-14 01:11:59 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:11:59.393750 | orchestrator | 2025-04-14 01:11:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:11:59.394835 | orchestrator | 2025-04-14 01:11:59 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:11:59.397265 | orchestrator | 2025-04-14 01:11:59 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:11:59.401353 | orchestrator | 2025-04-14 01:11:59 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:12:02.454232 | orchestrator | 2025-04-14 01:11:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:02.454478 | orchestrator | 2025-04-14 01:12:02 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:02.454795 | orchestrator | 2025-04-14 01:12:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:02.455966 | orchestrator | 2025-04-14 01:12:02 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:12:02.457513 | orchestrator | 2025-04-14 01:12:02 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:02.459038 | orchestrator | 2025-04-14 01:12:02 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state STARTED 2025-04-14 01:12:05.541090 | orchestrator | 2025-04-14 01:12:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:05.541251 | orchestrator | 2025-04-14 
01:12:05 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:05.543029 | orchestrator | 2025-04-14 01:12:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:05.543565 | orchestrator | 2025-04-14 01:12:05 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:12:05.544829 | orchestrator | 2025-04-14 01:12:05 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:05.549842 | orchestrator | 2025-04-14 01:12:05 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:05.552716 | orchestrator | 2025-04-14 01:12:05 | INFO  | Task 188bac4e-6386-4afa-aef4-0a151b336cdc is in state SUCCESS 2025-04-14 01:12:05.554570 | orchestrator | 2025-04-14 01:12:05.554620 | orchestrator | 2025-04-14 01:12:05.554646 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:12:05.554672 | orchestrator | 2025-04-14 01:12:05.554698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:12:05.554725 | orchestrator | Monday 14 April 2025 01:08:46 +0000 (0:00:00.907) 0:00:00.907 ********** 2025-04-14 01:12:05.554746 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:12:05.554762 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:12:05.554776 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:12:05.554790 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:12:05.554829 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:12:05.554845 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:12:05.554859 | orchestrator | 2025-04-14 01:12:05.554874 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:12:05.554888 | orchestrator | Monday 14 April 2025 01:08:48 +0000 (0:00:01.533) 0:00:02.441 ********** 2025-04-14 01:12:05.554902 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-04-14 01:12:05.554916 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-04-14 01:12:05.554930 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-04-14 01:12:05.554944 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-04-14 01:12:05.554958 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-04-14 01:12:05.554972 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-04-14 01:12:05.554986 | orchestrator | 2025-04-14 01:12:05.555000 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-04-14 01:12:05.555013 | orchestrator | 2025-04-14 01:12:05.555027 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-14 01:12:05.555041 | orchestrator | Monday 14 April 2025 01:08:49 +0000 (0:00:00.953) 0:00:03.395 ********** 2025-04-14 01:12:05.555055 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:12:05.555071 | orchestrator | 2025-04-14 01:12:05.555085 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-04-14 01:12:05.555098 | orchestrator | Monday 14 April 2025 01:08:52 +0000 (0:00:02.948) 0:00:06.343 ********** 2025-04-14 01:12:05.555113 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-04-14 
01:12:05.555127 | orchestrator | 2025-04-14 01:12:05.555143 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-04-14 01:12:05.555159 | orchestrator | Monday 14 April 2025 01:08:55 +0000 (0:00:03.389) 0:00:09.733 ********** 2025-04-14 01:12:05.555175 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-04-14 01:12:05.555191 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-04-14 01:12:05.555206 | orchestrator | 2025-04-14 01:12:05.555222 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-04-14 01:12:05.555249 | orchestrator | Monday 14 April 2025 01:09:02 +0000 (0:00:06.722) 0:00:16.456 ********** 2025-04-14 01:12:05.555265 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:12:05.555307 | orchestrator | 2025-04-14 01:12:05.555323 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-04-14 01:12:05.555339 | orchestrator | Monday 14 April 2025 01:09:06 +0000 (0:00:03.778) 0:00:20.235 ********** 2025-04-14 01:12:05.555354 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:12:05.555370 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-04-14 01:12:05.555386 | orchestrator | 2025-04-14 01:12:05.555401 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-04-14 01:12:05.555418 | orchestrator | Monday 14 April 2025 01:09:10 +0000 (0:00:03.956) 0:00:24.191 ********** 2025-04-14 01:12:05.555434 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:12:05.555450 | orchestrator | 2025-04-14 01:12:05.555465 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-04-14 01:12:05.555481 | orchestrator | Monday 14 April 2025 01:09:13 +0000 (0:00:03.323) 0:00:27.515 ********** 2025-04-14 01:12:05.555497 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-04-14 01:12:05.555650 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-04-14 01:12:05.555667 | orchestrator | 2025-04-14 01:12:05.555681 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-04-14 01:12:05.555704 | orchestrator | Monday 14 April 2025 01:09:22 +0000 (0:00:08.580) 0:00:36.095 ********** 2025-04-14 01:12:05.555770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.555792 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.555808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.555824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.555839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.555865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.555900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.555917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.555932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.555947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.555961 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.555999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.556016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.556046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.556116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.556131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.556170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.556192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.556245 | orchestrator | 2025-04-14 01:12:05.556259 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-14 01:12:05.556317 | orchestrator | Monday 14 April 2025 01:09:25 +0000 (0:00:03.451) 0:00:39.547 ********** 2025-04-14 01:12:05.556333 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.556349 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.556365 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.556381 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:12:05.556396 | orchestrator | 2025-04-14 01:12:05.556412 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-04-14 01:12:05.556427 | orchestrator | Monday 14 April 2025 01:09:27 +0000 (0:00:01.507) 0:00:41.054 ********** 2025-04-14 01:12:05.556443 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-04-14 01:12:05.556458 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-04-14 01:12:05.556474 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-04-14 01:12:05.556496 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-04-14 01:12:05.556512 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-04-14 01:12:05.556527 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-04-14 01:12:05.556543 | orchestrator | 2025-04-14 01:12:05.556558 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-04-14 01:12:05.556574 | orchestrator | Monday 14 April 2025 01:09:33 +0000 (0:00:06.109) 0:00:47.163 ********** 2025-04-14 01:12:05.556590 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556608 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556634 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556669 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556686 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556709 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-04-14 01:12:05.556725 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.556750 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.557051 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.557085 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.557102 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.557126 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-04-14 01:12:05.557142 | orchestrator | 2025-04-14 01:12:05.557158 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-04-14 01:12:05.557174 | orchestrator | Monday 14 April 2025 01:09:39 +0000 (0:00:06.190) 0:00:53.354 ********** 2025-04-14 01:12:05.557190 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:05.557205 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:05.557221 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:05.557237 | orchestrator | 2025-04-14 01:12:05.557252 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-04-14 01:12:05.557322 | orchestrator | Monday 14 April 2025 01:09:41 +0000 (0:00:02.325) 0:00:55.680 ********** 2025-04-14 01:12:05.557340 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-04-14 01:12:05.557355 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-04-14 01:12:05.557369 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-04-14 01:12:05.557382 | orchestrator | changed: [testbed-node-3] => 
(item=ceph.client.cinder-backup.keyring) 2025-04-14 01:12:05.557396 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-04-14 01:12:05.557410 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-04-14 01:12:05.557432 | orchestrator | 2025-04-14 01:12:05.557446 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-04-14 01:12:05.557460 | orchestrator | Monday 14 April 2025 01:09:45 +0000 (0:00:03.728) 0:00:59.409 ********** 2025-04-14 01:12:05.557474 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-04-14 01:12:05.557488 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-04-14 01:12:05.557502 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-04-14 01:12:05.557516 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-04-14 01:12:05.557531 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-04-14 01:12:05.557605 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-04-14 01:12:05.557621 | orchestrator | 2025-04-14 01:12:05.557649 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-04-14 01:12:05.557665 | orchestrator | Monday 14 April 2025 01:09:46 +0000 (0:00:01.334) 0:01:00.743 ********** 2025-04-14 01:12:05.557692 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.557708 | orchestrator | 2025-04-14 01:12:05.557723 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-04-14 01:12:05.557773 | orchestrator | Monday 14 April 2025 01:09:46 +0000 (0:00:00.232) 0:01:00.976 ********** 2025-04-14 01:12:05.557789 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.557840 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.557857 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.557872 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.557886 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.557900 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.557913 | orchestrator | 2025-04-14 01:12:05.557927 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-14 01:12:05.557941 | orchestrator | Monday 14 April 2025 01:09:48 +0000 (0:00:01.239) 0:01:02.215 ********** 2025-04-14 01:12:05.557956 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:12:05.557971 | orchestrator | 2025-04-14 01:12:05.557985 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-04-14 01:12:05.557998 | orchestrator | Monday 14 April 2025 01:09:50 +0000 (0:00:02.117) 0:01:04.333 ********** 2025-04-14 01:12:05.558074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.558109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.558150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.558165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558223 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558329 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.558343 | orchestrator | 2025-04-14 01:12:05.558356 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-04-14 01:12:05.558369 | orchestrator | Monday 14 April 2025 01:09:54 +0000 (0:00:03.898) 0:01:08.232 ********** 2025-04-14 01:12:05.558394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558421 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.558434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558508 | orchestrator | skipping: 
[testbed-node-2] 2025-04-14 01:12:05.558521 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.558534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558560 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.558573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558617 | orchestrator | skipping: [testbed-node-4] 2025-04-14 
01:12:05.558636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558662 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.558675 | orchestrator | 2025-04-14 01:12:05.558687 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-04-14 01:12:05.558700 | orchestrator | Monday 14 April 2025 01:09:56 +0000 (0:00:02.104) 0:01:10.336 ********** 2025-04-14 01:12:05.558712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-04-14 01:12:05.558739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.558816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558829 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.558842 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.558854 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.558867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558909 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.558937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558964 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.558977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.558990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559002 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.559015 | orchestrator | 2025-04-14 01:12:05.559027 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-04-14 01:12:05.559047 | orchestrator | Monday 14 April 2025 01:09:58 +0000 (0:00:02.596) 0:01:12.932 ********** 2025-04-14 01:12:05.559060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.559170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.559204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.559236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 
01:12:05.559421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559583 | orchestrator | 2025-04-14 01:12:05.559596 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] 
********************************** 2025-04-14 01:12:05.559609 | orchestrator | Monday 14 April 2025 01:10:02 +0000 (0:00:03.840) 0:01:16.773 ********** 2025-04-14 01:12:05.559621 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-14 01:12:05.559634 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.559647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-14 01:12:05.559659 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-14 01:12:05.559672 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.559685 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-04-14 01:12:05.559697 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.559714 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-14 01:12:05.559727 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-04-14 01:12:05.559745 | orchestrator | 2025-04-14 01:12:05.559757 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-04-14 01:12:05.559770 | orchestrator | Monday 14 April 2025 01:10:08 +0000 (0:00:05.457) 0:01:22.232 ********** 2025-04-14 01:12:05.559782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.559861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.559884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 
01:12:05.559898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.559918 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.559960 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.559988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560111 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.560193 | orchestrator | 2025-04-14 01:12:05.560211 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-04-14 01:12:05.560224 | orchestrator | Monday 14 April 2025 01:10:21 +0000 (0:00:13.011) 0:01:35.243 ********** 2025-04-14 01:12:05.560237 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.560250 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.560262 | 
orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.560291 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:12:05.560304 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:12:05.560317 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:12:05.560329 | orchestrator | 2025-04-14 01:12:05.560342 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-04-14 01:12:05.560583 | orchestrator | Monday 14 April 2025 01:10:28 +0000 (0:00:07.157) 0:01:42.401 ********** 2025-04-14 01:12:05.560605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.560619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.560688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560728 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.560741 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.560761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.560780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560874 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.560888 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.560901 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.560914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.560927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.560967 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.560993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.561055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561097 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.561109 | orchestrator | 2025-04-14 01:12:05.561122 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-04-14 01:12:05.561135 | orchestrator | Monday 14 April 2025 01:10:31 +0000 (0:00:02.984) 0:01:45.385 ********** 2025-04-14 01:12:05.561147 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.561160 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.561172 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.561184 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.561197 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.561209 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.561221 | orchestrator | 2025-04-14 01:12:05.561234 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-04-14 01:12:05.561246 | orchestrator | Monday 14 April 2025 01:10:32 +0000 
(0:00:01.256) 0:01:46.642 ********** 2025-04-14 01:12:05.561266 | orchestrator | 2025-04-14 01:12:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:05.561306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.561321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.561347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.561388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.561402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-04-14 01:12:05.561416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-04-14 01:12:05.561428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561441 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-04-14 01:12:05.561635 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-04-14 01:12:05.561675 | orchestrator | 2025-04-14 01:12:05.561687 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-04-14 01:12:05.561700 | orchestrator | Monday 14 April 2025 01:10:36 +0000 (0:00:03.481) 0:01:50.124 ********** 2025-04-14 01:12:05.561712 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:05.561725 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:05.561737 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:05.561750 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:12:05.561762 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:12:05.561884 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:12:05.561898 | orchestrator | 2025-04-14 01:12:05.561910 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-04-14 01:12:05.561923 | orchestrator | Monday 14 April 2025 01:10:37 +0000 (0:00:01.056) 0:01:51.180 ********** 2025-04-14 01:12:05.561935 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:05.561948 | orchestrator | 2025-04-14 01:12:05.561961 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-04-14 01:12:05.561982 | orchestrator | Monday 14 April 2025 01:10:39 +0000 (0:00:02.495) 0:01:53.675 ********** 2025-04-14 01:12:05.561995 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:05.562007 | orchestrator | 2025-04-14 01:12:05.562053 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-04-14 01:12:05.562065 | orchestrator | Monday 14 April 2025 01:10:42 +0000 (0:00:02.318) 0:01:55.994 ********** 2025-04-14 01:12:05.562078 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:05.562090 | 
orchestrator | 2025-04-14 01:12:05.562103 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562115 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:18.023) 0:02:14.018 ********** 2025-04-14 01:12:05.562127 | orchestrator | 2025-04-14 01:12:05.562140 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562152 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.059) 0:02:14.078 ********** 2025-04-14 01:12:05.562197 | orchestrator | 2025-04-14 01:12:05.562210 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562222 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.283) 0:02:14.361 ********** 2025-04-14 01:12:05.562235 | orchestrator | 2025-04-14 01:12:05.562354 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562373 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.056) 0:02:14.417 ********** 2025-04-14 01:12:05.562385 | orchestrator | 2025-04-14 01:12:05.562398 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562410 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.055) 0:02:14.473 ********** 2025-04-14 01:12:05.562422 | orchestrator | 2025-04-14 01:12:05.562435 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-04-14 01:12:05.562447 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.056) 0:02:14.529 ********** 2025-04-14 01:12:05.562460 | orchestrator | 2025-04-14 01:12:05.562472 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-04-14 01:12:05.562492 | orchestrator | Monday 14 April 2025 01:11:00 +0000 (0:00:00.249) 0:02:14.779 ********** 2025-04-14 01:12:08.599839 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:08.599969 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:12:08.599989 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:12:08.600004 | orchestrator | 2025-04-14 01:12:08.600021 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-04-14 01:12:08.600037 | orchestrator | Monday 14 April 2025 01:11:22 +0000 (0:00:22.147) 0:02:36.927 ********** 2025-04-14 01:12:08.600052 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:08.600067 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:12:08.600083 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:12:08.600097 | orchestrator | 2025-04-14 01:12:08.600112 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-04-14 01:12:08.600127 | orchestrator | Monday 14 April 2025 01:11:28 +0000 (0:00:05.519) 0:02:42.446 ********** 2025-04-14 01:12:08.600331 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:12:08.600355 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:12:08.600369 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:12:08.600384 | orchestrator | 2025-04-14 01:12:08.600398 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-04-14 01:12:08.600412 | orchestrator | Monday 14 April 2025 01:11:51 +0000 (0:00:22.676) 0:03:05.123 ********** 2025-04-14 01:12:08.600426 | orchestrator | changed: [testbed-node-3] 2025-04-14 
01:12:08.600440 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:12:08.600454 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:12:08.600468 | orchestrator | 2025-04-14 01:12:08.600482 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-04-14 01:12:08.600497 | orchestrator | Monday 14 April 2025 01:12:03 +0000 (0:00:11.899) 0:03:17.023 ********** 2025-04-14 01:12:08.600537 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:08.600552 | orchestrator | 2025-04-14 01:12:08.600566 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:12:08.600581 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-04-14 01:12:08.600597 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-14 01:12:08.600612 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-04-14 01:12:08.600626 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:12:08.600640 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:12:08.600654 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-04-14 01:12:08.600668 | orchestrator | 2025-04-14 01:12:08.600682 | orchestrator | 2025-04-14 01:12:08.600696 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:12:08.600710 | orchestrator | Monday 14 April 2025 01:12:03 +0000 (0:00:00.744) 0:03:17.767 ********** 2025-04-14 01:12:08.600724 | orchestrator | =============================================================================== 2025-04-14 01:12:08.600738 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 22.68s 2025-04-14 01:12:08.600752 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.15s 2025-04-14 01:12:08.600766 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.02s 2025-04-14 01:12:08.600780 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.01s 2025-04-14 01:12:08.600794 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.90s 2025-04-14 01:12:08.600808 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.58s 2025-04-14 01:12:08.600822 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 7.16s 2025-04-14 01:12:08.600836 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.72s 2025-04-14 01:12:08.600849 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.19s 2025-04-14 01:12:08.600863 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 6.11s 2025-04-14 01:12:08.600877 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.52s 2025-04-14 01:12:08.600905 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 5.46s 2025-04-14 01:12:08.600922 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.96s 2025-04-14 01:12:08.600939 | 
orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.90s 2025-04-14 01:12:08.600954 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.84s 2025-04-14 01:12:08.600970 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.78s 2025-04-14 01:12:08.600985 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.73s 2025-04-14 01:12:08.601002 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.48s 2025-04-14 01:12:08.601017 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.45s 2025-04-14 01:12:08.601033 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.39s 2025-04-14 01:12:08.601065 | orchestrator | 2025-04-14 01:12:08 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:08.601635 | orchestrator | 2025-04-14 01:12:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:08.601817 | orchestrator | 2025-04-14 01:12:08 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:12:08.603713 | orchestrator | 2025-04-14 01:12:08 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:08.605818 | orchestrator | 2025-04-14 01:12:08 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:11.657935 | orchestrator | 2025-04-14 01:12:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:11.658126 | orchestrator | 2025-04-14 01:12:11 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:11.658492 | orchestrator | 2025-04-14 01:12:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:11.658531 | orchestrator | 2025-04-14 01:12:11 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:12:11.661339 | orchestrator | 2025-04-14 01:12:11 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:14.708655 | orchestrator | 2025-04-14 01:12:11 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:14.708754 | orchestrator | 2025-04-14 01:12:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:14.708781 | orchestrator | 2025-04-14 01:12:14 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:14.708959 | orchestrator | 2025-04-14 01:12:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:14.708981 | orchestrator | 2025-04-14 01:12:14 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state STARTED 2025-04-14 01:12:14.709686 | orchestrator | 2025-04-14 01:12:14 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:14.711603 | orchestrator | 2025-04-14 01:12:14 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:17.765635 | orchestrator | 2025-04-14 01:12:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:17.765779 | orchestrator | 2025-04-14 01:12:17 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:17.766545 | orchestrator | 2025-04-14 01:12:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:17.768432 | orchestrator | 2025-04-14 01:12:17 | 
INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:17.770785 | orchestrator | 2025-04-14 01:12:17 | INFO  | Task 71feb051-7e7a-482f-a8db-0918b956ff0e is in state SUCCESS 2025-04-14 01:12:17.772784 | orchestrator | 2025-04-14 01:12:17.772822 | orchestrator | 2025-04-14 01:12:17.772838 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:12:17.772852 | orchestrator | 2025-04-14 01:12:17.772866 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:12:17.772881 | orchestrator | Monday 14 April 2025 01:08:28 +0000 (0:00:00.433) 0:00:00.433 ********** 2025-04-14 01:12:17.772896 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:12:17.772912 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:12:17.772926 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:12:17.772940 | orchestrator | 2025-04-14 01:12:17.772954 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:12:17.772968 | orchestrator | Monday 14 April 2025 01:08:29 +0000 (0:00:00.944) 0:00:01.378 ********** 2025-04-14 01:12:17.772982 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-04-14 01:12:17.772996 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-04-14 01:12:17.773010 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-04-14 01:12:17.773051 | orchestrator | 2025-04-14 01:12:17.773066 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-04-14 01:12:17.773159 | orchestrator | 2025-04-14 01:12:17.773179 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-14 01:12:17.773511 | orchestrator | Monday 14 April 2025 01:08:30 +0000 (0:00:01.001) 0:00:02.379 ********** 2025-04-14 01:12:17.773534 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:12:17.773550 | orchestrator | 2025-04-14 01:12:17.773564 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-04-14 01:12:17.773578 | orchestrator | Monday 14 April 2025 01:08:32 +0000 (0:00:01.492) 0:00:03.872 ********** 2025-04-14 01:12:17.773592 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-04-14 01:12:17.773606 | orchestrator | 2025-04-14 01:12:17.773620 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-04-14 01:12:17.773634 | orchestrator | Monday 14 April 2025 01:08:35 +0000 (0:00:03.305) 0:00:07.177 ********** 2025-04-14 01:12:17.773648 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-04-14 01:12:17.773662 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-04-14 01:12:17.773676 | orchestrator | 2025-04-14 01:12:17.773691 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-04-14 01:12:17.773705 | orchestrator | Monday 14 April 2025 01:08:42 +0000 (0:00:06.574) 0:00:13.751 ********** 2025-04-14 01:12:17.773719 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:12:17.773734 | orchestrator | 2025-04-14 01:12:17.773747 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 
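Note: the service-ks-register tasks in this play (creating the glance image service and its internal/public endpoints above, plus the service user and admin role grant that follow) are ordinary Keystone registration operations. The sketch below is illustrative only, assuming the openstack.cloud collection and placeholder credentials; it is not the role actually executed here. The endpoint URLs are taken from the log, the region name and cloud entry are assumptions.

    # Minimal sketch, assuming the openstack.cloud collection and an
    # "admin" entry in clouds.yaml; kolla-ansible's service-ks-register
    # role performs the equivalent steps internally.
    - name: Register glance in Keystone (illustrative sketch)
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Create the image service
          openstack.cloud.catalog_service:
            cloud: admin                 # assumed clouds.yaml entry
            name: glance
            service_type: image
            state: present

        - name: Create internal and public endpoints
          openstack.cloud.endpoint:
            cloud: admin
            service: glance
            endpoint_interface: "{{ item.interface }}"
            url: "{{ item.url }}"
            region: RegionOne            # assumed region name
            state: present
          loop:
            - { interface: internal, url: "https://api-int.testbed.osism.xyz:9292" }
            - { interface: public, url: "https://api.testbed.osism.xyz:9292" }

        - name: Create the glance service user
          openstack.cloud.identity_user:
            cloud: admin
            name: glance
            password: "{{ glance_keystone_password }}"   # placeholder variable
            default_project: service
            state: present

        - name: Grant the admin role on the service project
          openstack.cloud.role_assignment:
            cloud: admin
            user: glance
            role: admin
            project: service

Two endpoints are registered because the load balancer terminates the API on separate internal (api-int) and public (api) FQDNs, as the log entries above show.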
2025-04-14 01:12:17.773762 | orchestrator | Monday 14 April 2025 01:08:45 +0000 (0:00:03.492) 0:00:17.244 ********** 2025-04-14 01:12:17.773776 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:12:17.773790 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-04-14 01:12:17.773804 | orchestrator | 2025-04-14 01:12:17.773818 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-04-14 01:12:17.773833 | orchestrator | Monday 14 April 2025 01:08:49 +0000 (0:00:03.823) 0:00:21.068 ********** 2025-04-14 01:12:17.773846 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:12:17.773861 | orchestrator | 2025-04-14 01:12:17.773874 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-04-14 01:12:17.773888 | orchestrator | Monday 14 April 2025 01:08:52 +0000 (0:00:03.174) 0:00:24.242 ********** 2025-04-14 01:12:17.773902 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-04-14 01:12:17.773916 | orchestrator | 2025-04-14 01:12:17.773930 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-04-14 01:12:17.773944 | orchestrator | Monday 14 April 2025 01:08:57 +0000 (0:00:04.456) 0:00:28.698 ********** 2025-04-14 01:12:17.773973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.774006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.774073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.774123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.774142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.774168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.774193 | orchestrator | 2025-04-14 01:12:17.774209 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-14 01:12:17.774225 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:04.164) 0:00:32.863 ********** 2025-04-14 01:12:17.774241 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:12:17.774281 | orchestrator | 2025-04-14 01:12:17.774298 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-04-14 01:12:17.774482 | orchestrator | Monday 14 April 2025 01:09:01 +0000 (0:00:00.639) 0:00:33.502 ********** 2025-04-14 01:12:17.774505 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:12:17.774519 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.774534 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:12:17.774548 | orchestrator | 2025-04-14 01:12:17.774563 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-04-14 01:12:17.774577 | orchestrator | Monday 14 April 2025 01:09:11 +0000 (0:00:09.877) 0:00:43.380 ********** 2025-04-14 01:12:17.774591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774606 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774634 | orchestrator | 
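Note: the external_ceph.yml steps around this point stage a Ceph client configuration for the Glance RBD backend: a per-service config subdirectory is created, a ceph.conf for the single enabled backend (name 'rbd', cluster 'ceph') is copied, and the keyring copy follows in the next task. The sketch below shows that pattern in minimal form; the source layout, destination paths and keyring file name are assumptions, not the paths used by the role in this run.

    # Illustrative sketch only: paths and file names are placeholders,
    # not the ones used by the kolla-ansible glance role in this log.
    - name: Stage external Ceph config for glance-api (sketch)
      hosts: glance-api
      become: true
      vars:
        ceph_cluster: ceph                         # from the log: cluster 'ceph', backend 'rbd'
        glance_config_dir: /etc/kolla/glance-api   # assumed destination
      tasks:
        - name: Ensure the per-service Ceph config subdir exists
          ansible.builtin.file:
            path: "{{ glance_config_dir }}/{{ ceph_cluster }}"
            state: directory
            mode: "0770"

        - name: Copy the cluster ceph.conf for the RBD backend
          ansible.builtin.copy:
            src: "files/{{ ceph_cluster }}.conf"   # assumed source layout
            dest: "{{ glance_config_dir }}/{{ ceph_cluster }}/{{ ceph_cluster }}.conf"
            mode: "0660"

        - name: Copy the Glance client keyring
          ansible.builtin.copy:
            src: "files/{{ ceph_cluster }}.client.glance.keyring"   # assumed keyring name
            dest: "{{ glance_config_dir }}/{{ ceph_cluster }}/{{ ceph_cluster }}.client.glance.keyring"
            mode: "0660"

With files staged like this, glance-api's [glance_store] section typically points rbd_store_ceph_conf at the staged ceph.conf and rbd_store_user at the client whose keyring was copied.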
2025-04-14 01:12:17.774648 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-04-14 01:12:17.774662 | orchestrator | Monday 14 April 2025 01:09:13 +0000 (0:00:02.223) 0:00:45.603 ********** 2025-04-14 01:12:17.774676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774705 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-04-14 01:12:17.774728 | orchestrator | 2025-04-14 01:12:17.774742 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-04-14 01:12:17.774756 | orchestrator | Monday 14 April 2025 01:09:15 +0000 (0:00:01.422) 0:00:47.026 ********** 2025-04-14 01:12:17.774770 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:12:17.774791 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:12:17.774806 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:12:17.774820 | orchestrator | 2025-04-14 01:12:17.774834 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-04-14 01:12:17.774848 | orchestrator | Monday 14 April 2025 01:09:16 +0000 (0:00:00.669) 0:00:47.696 ********** 2025-04-14 01:12:17.774862 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.774877 | orchestrator | 2025-04-14 01:12:17.774891 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-04-14 01:12:17.774905 | orchestrator | Monday 14 April 2025 01:09:16 +0000 (0:00:00.271) 0:00:47.967 ********** 2025-04-14 01:12:17.774918 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.774938 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.774952 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.774966 | orchestrator | 2025-04-14 01:12:17.774980 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-14 01:12:17.774994 | orchestrator | Monday 14 April 2025 01:09:16 +0000 (0:00:00.272) 0:00:48.239 ********** 2025-04-14 01:12:17.775008 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:12:17.775022 | orchestrator | 2025-04-14 01:12:17.775036 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-04-14 01:12:17.775050 | orchestrator | Monday 14 April 2025 01:09:17 +0000 (0:00:00.781) 0:00:49.021 ********** 2025-04-14 01:12:17.775076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 
'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775144 | orchestrator | 2025-04-14 01:12:17.775160 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-04-14 01:12:17.775176 | orchestrator | Monday 14 April 2025 01:09:22 +0000 (0:00:04.671) 0:00:53.693 ********** 2025-04-14 01:12:17.775193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775217 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.775242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775295 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.775312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775337 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.775353 | orchestrator | 2025-04-14 01:12:17.775369 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-04-14 01:12:17.775385 | orchestrator | Monday 14 April 2025 01:09:28 +0000 (0:00:06.056) 0:00:59.749 ********** 2025-04-14 01:12:17.775408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775426 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.775443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775467 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.775483 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-04-14 01:12:17.775499 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.775513 | orchestrator | 2025-04-14 01:12:17.775528 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-04-14 01:12:17.775542 | orchestrator | Monday 14 April 2025 01:09:38 +0000 (0:00:10.584) 0:01:10.334 ********** 2025-04-14 01:12:17.775556 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.775570 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.775584 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.775598 | orchestrator | 2025-04-14 01:12:17.775616 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-04-14 01:12:17.775631 | orchestrator | Monday 14 April 2025 01:09:43 +0000 (0:00:04.598) 0:01:14.932 ********** 2025-04-14 01:12:17.775646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.775721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.775745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.775761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.775783 | orchestrator | 2025-04-14 01:12:17.775798 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-04-14 01:12:17.775812 | orchestrator | Monday 14 April 2025 01:09:48 +0000 (0:00:05.164) 0:01:20.097 ********** 2025-04-14 01:12:17.775826 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.775840 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:12:17.775854 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:12:17.775868 | orchestrator | 2025-04-14 
01:12:17.775882 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-04-14 01:12:17.775896 | orchestrator | Monday 14 April 2025 01:10:01 +0000 (0:00:12.987) 0:01:33.085 ********** 2025-04-14 01:12:17.775910 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.775924 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.775938 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.775952 | orchestrator | 2025-04-14 01:12:17.775966 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-04-14 01:12:17.775980 | orchestrator | Monday 14 April 2025 01:10:15 +0000 (0:00:14.304) 0:01:47.389 ********** 2025-04-14 01:12:17.775995 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776008 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776023 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776037 | orchestrator | 2025-04-14 01:12:17.776051 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-04-14 01:12:17.776070 | orchestrator | Monday 14 April 2025 01:10:33 +0000 (0:00:17.454) 0:02:04.844 ********** 2025-04-14 01:12:17.776085 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776099 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776113 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776127 | orchestrator | 2025-04-14 01:12:17.776141 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-04-14 01:12:17.776155 | orchestrator | Monday 14 April 2025 01:10:41 +0000 (0:00:07.906) 0:02:12.751 ********** 2025-04-14 01:12:17.776169 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776188 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776202 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776216 | orchestrator | 2025-04-14 01:12:17.776230 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-04-14 01:12:17.776244 | orchestrator | Monday 14 April 2025 01:10:49 +0000 (0:00:08.817) 0:02:21.568 ********** 2025-04-14 01:12:17.776310 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776335 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776349 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776363 | orchestrator | 2025-04-14 01:12:17.776377 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-04-14 01:12:17.776391 | orchestrator | Monday 14 April 2025 01:10:50 +0000 (0:00:00.909) 0:02:22.477 ********** 2025-04-14 01:12:17.776405 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-14 01:12:17.776419 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776433 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-14 01:12:17.776447 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776461 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-04-14 01:12:17.776475 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776489 | orchestrator | 2025-04-14 01:12:17.776503 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-04-14 01:12:17.776517 | 
orchestrator | Monday 14 April 2025 01:10:57 +0000 (0:00:07.001) 0:02:29.479 ********** 2025-04-14 01:12:17.776531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.776554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.776573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.776593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.776613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-04-14 01:12:17.776628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-04-14 01:12:17.776647 | orchestrator | 2025-04-14 01:12:17.776659 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-04-14 01:12:17.776672 | orchestrator | Monday 14 April 2025 01:11:02 +0000 (0:00:05.014) 0:02:34.493 ********** 2025-04-14 01:12:17.776684 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:12:17.776697 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:12:17.776709 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:12:17.776722 | orchestrator | 2025-04-14 01:12:17.776739 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-04-14 01:12:17.776752 | orchestrator | Monday 14 April 2025 01:11:03 +0000 (0:00:00.527) 0:02:35.021 ********** 2025-04-14 01:12:17.776764 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.776777 | orchestrator | 2025-04-14 01:12:17.776789 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-04-14 01:12:17.776802 | orchestrator | Monday 14 April 2025 01:11:05 +0000 (0:00:02.208) 0:02:37.230 ********** 2025-04-14 01:12:17.776814 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.776826 | orchestrator | 2025-04-14 01:12:17.776839 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-04-14 01:12:17.776851 | orchestrator | Monday 14 April 2025 01:11:07 +0000 (0:00:02.283) 0:02:39.513 ********** 2025-04-14 01:12:17.776864 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.776876 | orchestrator | 2025-04-14 01:12:17.776889 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-04-14 01:12:17.776901 | orchestrator | Monday 14 April 2025 01:11:10 +0000 (0:00:02.112) 0:02:41.626 ********** 2025-04-14 01:12:17.776913 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.776926 | orchestrator | 2025-04-14 01:12:17.776938 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-04-14 01:12:17.776951 | orchestrator | Monday 14 April 2025 01:11:38 +0000 (0:00:28.564) 0:03:10.190 ********** 2025-04-14 01:12:17.776963 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.776975 | orchestrator | 2025-04-14 01:12:17.776988 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-14 01:12:17.777000 | orchestrator | Monday 14 April 2025 01:11:40 +0000 (0:00:01.699) 0:03:11.890 ********** 2025-04-14 01:12:17.777012 | orchestrator | 2025-04-14 01:12:17.777024 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-14 01:12:17.777037 | orchestrator | Monday 14 April 2025 01:11:40 +0000 (0:00:00.061) 0:03:11.952 ********** 2025-04-14 01:12:17.777049 | orchestrator | 2025-04-14 01:12:17.777061 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-04-14 01:12:17.777074 | orchestrator 
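The glance-api item above carries the complete HAProxy wiring for the service: an internal and an external frontend on port 9292, six-hour client/server timeouts, a healthcheck_curl probe against each node's own API address, and a custom_member_list with one server line per controller. A minimal sketch of how such member lines can be derived from the inventory is shown below; the node names and addresses are taken from the log, while the helper function itself is only illustrative and not the kolla-ansible template:

```python
# Illustrative helper: rebuild the HAProxy "server" lines that appear in the
# glance_api custom_member_list above from a node -> address mapping.
nodes = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def member_lines(nodes, port=9292, check="check inter 2000 rise 2 fall 5"):
    # One backend line per controller, identical to the entries in the log.
    return [f"server {name} {addr}:{port} {check}" for name, addr in nodes.items()]

for line in member_lines(nodes):
    print(line)
# server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
# server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
# server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```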
| Monday 14 April 2025 01:11:40 +0000 (0:00:00.056) 0:03:12.009 ********** 2025-04-14 01:12:17.777086 | orchestrator | 2025-04-14 01:12:17.777098 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-04-14 01:12:17.777110 | orchestrator | Monday 14 April 2025 01:11:40 +0000 (0:00:00.204) 0:03:12.213 ********** 2025-04-14 01:12:17.777123 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:12:17.777135 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:12:17.777147 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:12:17.777160 | orchestrator | 2025-04-14 01:12:17.777172 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:12:17.777185 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-04-14 01:12:17.777200 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-14 01:12:17.777213 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-04-14 01:12:17.777225 | orchestrator | 2025-04-14 01:12:17.777237 | orchestrator | 2025-04-14 01:12:17.777250 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:12:17.777284 | orchestrator | Monday 14 April 2025 01:12:15 +0000 (0:00:35.094) 0:03:47.308 ********** 2025-04-14 01:12:17.777303 | orchestrator | =============================================================================== 2025-04-14 01:12:17.777316 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.09s 2025-04-14 01:12:17.777328 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.56s 2025-04-14 01:12:17.777340 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 17.45s 2025-04-14 01:12:17.777353 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 14.30s 2025-04-14 01:12:17.777365 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 12.99s 2025-04-14 01:12:17.777377 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ----- 10.58s 2025-04-14 01:12:17.777389 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 9.88s 2025-04-14 01:12:17.777402 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 8.82s 2025-04-14 01:12:17.777414 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.91s 2025-04-14 01:12:17.777426 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 7.00s 2025-04-14 01:12:17.777438 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.57s 2025-04-14 01:12:17.777450 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.06s 2025-04-14 01:12:17.777463 | orchestrator | glance : Copying over config.json files for services -------------------- 5.17s 2025-04-14 01:12:17.777475 | orchestrator | glance : Check glance containers ---------------------------------------- 5.01s 2025-04-14 01:12:17.777487 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.67s 2025-04-14 01:12:17.777500 | orchestrator | glance : Creating TLS backend PEM File 
---------------------------------- 4.60s 2025-04-14 01:12:17.777512 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.46s 2025-04-14 01:12:17.777525 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.16s 2025-04-14 01:12:17.777537 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.82s 2025-04-14 01:12:17.777554 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.49s 2025-04-14 01:12:20.826830 | orchestrator | 2025-04-14 01:12:17 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:20.826955 | orchestrator | 2025-04-14 01:12:17 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:20.826977 | orchestrator | 2025-04-14 01:12:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:20.827011 | orchestrator | 2025-04-14 01:12:20 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:20.827821 | orchestrator | 2025-04-14 01:12:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:20.832394 | orchestrator | 2025-04-14 01:12:20 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:20.833240 | orchestrator | 2025-04-14 01:12:20 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:20.834388 | orchestrator | 2025-04-14 01:12:20 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:23.877093 | orchestrator | 2025-04-14 01:12:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:23.877227 | orchestrator | 2025-04-14 01:12:23 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:23.878292 | orchestrator | 2025-04-14 01:12:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:23.880857 | orchestrator | 2025-04-14 01:12:23 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:23.882686 | orchestrator | 2025-04-14 01:12:23 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:23.884733 | orchestrator | 2025-04-14 01:12:23 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:23.884864 | orchestrator | 2025-04-14 01:12:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:26.936242 | orchestrator | 2025-04-14 01:12:26 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:26.936520 | orchestrator | 2025-04-14 01:12:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:26.939498 | orchestrator | 2025-04-14 01:12:26 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:26.942516 | orchestrator | 2025-04-14 01:12:26 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:26.944271 | orchestrator | 2025-04-14 01:12:26 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:26.944891 | orchestrator | 2025-04-14 01:12:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:29.991800 | orchestrator | 2025-04-14 01:12:29 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:29.992622 | orchestrator | 2025-04-14 01:12:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:12:29.992660 | orchestrator | 2025-04-14 01:12:29 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:29.994481 | orchestrator | 2025-04-14 01:12:29 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:29.996519 | orchestrator | 2025-04-14 01:12:29 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:33.053838 | orchestrator | 2025-04-14 01:12:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:33.053979 | orchestrator | 2025-04-14 01:12:33 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:33.055825 | orchestrator | 2025-04-14 01:12:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:33.056939 | orchestrator | 2025-04-14 01:12:33 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:33.059889 | orchestrator | 2025-04-14 01:12:33 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:33.060898 | orchestrator | 2025-04-14 01:12:33 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:33.062135 | orchestrator | 2025-04-14 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:36.107590 | orchestrator | 2025-04-14 01:12:36 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:36.112203 | orchestrator | 2025-04-14 01:12:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:36.116095 | orchestrator | 2025-04-14 01:12:36 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:36.117857 | orchestrator | 2025-04-14 01:12:36 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:36.120324 | orchestrator | 2025-04-14 01:12:36 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:39.167861 | orchestrator | 2025-04-14 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:39.168042 | orchestrator | 2025-04-14 01:12:39 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:39.168466 | orchestrator | 2025-04-14 01:12:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:39.170081 | orchestrator | 2025-04-14 01:12:39 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:39.171676 | orchestrator | 2025-04-14 01:12:39 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:39.172622 | orchestrator | 2025-04-14 01:12:39 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:42.223627 | orchestrator | 2025-04-14 01:12:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:42.223762 | orchestrator | 2025-04-14 01:12:42 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:42.224312 | orchestrator | 2025-04-14 01:12:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:42.225039 | orchestrator | 2025-04-14 01:12:42 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:42.226288 | orchestrator | 2025-04-14 01:12:42 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:42.228007 | orchestrator | 2025-04-14 01:12:42 | INFO  | Task 
5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:42.228136 | orchestrator | 2025-04-14 01:12:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:45.279859 | orchestrator | 2025-04-14 01:12:45 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:45.280092 | orchestrator | 2025-04-14 01:12:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:45.282401 | orchestrator | 2025-04-14 01:12:45 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:45.283748 | orchestrator | 2025-04-14 01:12:45 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:45.285163 | orchestrator | 2025-04-14 01:12:45 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:48.326669 | orchestrator | 2025-04-14 01:12:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:48.326813 | orchestrator | 2025-04-14 01:12:48 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:48.328010 | orchestrator | 2025-04-14 01:12:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:48.329139 | orchestrator | 2025-04-14 01:12:48 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:48.330331 | orchestrator | 2025-04-14 01:12:48 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:48.331412 | orchestrator | 2025-04-14 01:12:48 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:51.386895 | orchestrator | 2025-04-14 01:12:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:51.387038 | orchestrator | 2025-04-14 01:12:51 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:51.388455 | orchestrator | 2025-04-14 01:12:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:51.390178 | orchestrator | 2025-04-14 01:12:51 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:51.391791 | orchestrator | 2025-04-14 01:12:51 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:51.393428 | orchestrator | 2025-04-14 01:12:51 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:54.445924 | orchestrator | 2025-04-14 01:12:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:54.446108 | orchestrator | 2025-04-14 01:12:54 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:54.446499 | orchestrator | 2025-04-14 01:12:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:54.449001 | orchestrator | 2025-04-14 01:12:54 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:54.450877 | orchestrator | 2025-04-14 01:12:54 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:54.455092 | orchestrator | 2025-04-14 01:12:54 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:12:57.516289 | orchestrator | 2025-04-14 01:12:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:12:57.516431 | orchestrator | 2025-04-14 01:12:57 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:12:57.517537 | orchestrator | 2025-04-14 01:12:57 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:12:57.519751 | orchestrator | 2025-04-14 01:12:57 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:12:57.520893 | orchestrator | 2025-04-14 01:12:57 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:12:57.523119 | orchestrator | 2025-04-14 01:12:57 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:13:00.572983 | orchestrator | 2025-04-14 01:12:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:00.573126 | orchestrator | 2025-04-14 01:13:00 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:00.575073 | orchestrator | 2025-04-14 01:13:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:00.577606 | orchestrator | 2025-04-14 01:13:00 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:00.580546 | orchestrator | 2025-04-14 01:13:00 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:00.581711 | orchestrator | 2025-04-14 01:13:00 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state STARTED 2025-04-14 01:13:03.636902 | orchestrator | 2025-04-14 01:13:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:03.637728 | orchestrator | 2025-04-14 01:13:03 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:03.643734 | orchestrator | 2025-04-14 01:13:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:03.643760 | orchestrator | 2025-04-14 01:13:03 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:03.645031 | orchestrator | 2025-04-14 01:13:03 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:03.646091 | orchestrator | 2025-04-14 01:13:03 | INFO  | Task 5d336fcb-0f84-4e10-a63c-516527d1d28c is in state SUCCESS 2025-04-14 01:13:03.646210 | orchestrator | 2025-04-14 01:13:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:06.697826 | orchestrator | 2025-04-14 01:13:06 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:06.698675 | orchestrator | 2025-04-14 01:13:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:06.700048 | orchestrator | 2025-04-14 01:13:06 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:06.701090 | orchestrator | 2025-04-14 01:13:06 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:09.752389 | orchestrator | 2025-04-14 01:13:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:09.752532 | orchestrator | 2025-04-14 01:13:09 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:09.754667 | orchestrator | 2025-04-14 01:13:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:09.757123 | orchestrator | 2025-04-14 01:13:09 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:09.759225 | orchestrator | 2025-04-14 01:13:09 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:09.759409 | orchestrator | 2025-04-14 01:13:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:12.807175 | orchestrator | 2025-04-14 01:13:12 | INFO  | Task 
b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:12.807714 | orchestrator | 2025-04-14 01:13:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:12.814349 | orchestrator | 2025-04-14 01:13:12 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:12.815365 | orchestrator | 2025-04-14 01:13:12 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:15.862765 | orchestrator | 2025-04-14 01:13:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:15.862878 | orchestrator | 2025-04-14 01:13:15 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:15.866895 | orchestrator | 2025-04-14 01:13:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:15.868647 | orchestrator | 2025-04-14 01:13:15 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:15.870481 | orchestrator | 2025-04-14 01:13:15 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:15.872258 | orchestrator | 2025-04-14 01:13:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:18.927535 | orchestrator | 2025-04-14 01:13:18 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state STARTED 2025-04-14 01:13:18.929677 | orchestrator | 2025-04-14 01:13:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:18.933305 | orchestrator | 2025-04-14 01:13:18 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:18.935523 | orchestrator | 2025-04-14 01:13:18 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:21.977890 | orchestrator | 2025-04-14 01:13:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:21.978092 | orchestrator | 2025-04-14 01:13:21 | INFO  | Task b4ad264d-c890-48fe-aeb8-89e58e07e415 is in state SUCCESS 2025-04-14 01:13:21.978503 | orchestrator | 2025-04-14 01:13:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:21.979266 | orchestrator | 2025-04-14 01:13:21 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:21.980107 | orchestrator | 2025-04-14 01:13:21 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:25.037536 | orchestrator | 2025-04-14 01:13:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:25.037646 | orchestrator | 2025-04-14 01:13:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:25.039580 | orchestrator | 2025-04-14 01:13:25 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:25.042561 | orchestrator | 2025-04-14 01:13:25 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:28.094180 | orchestrator | 2025-04-14 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:28.094322 | orchestrator | 2025-04-14 01:13:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:28.095906 | orchestrator | 2025-04-14 01:13:28 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:28.098618 | orchestrator | 2025-04-14 01:13:28 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:31.151237 | orchestrator | 2025-04-14 01:13:28 | INFO  | Wait 1 
second(s) until the next check 2025-04-14 01:13:31.151385 | orchestrator | 2025-04-14 01:13:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:31.152251 | orchestrator | 2025-04-14 01:13:31 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:31.153841 | orchestrator | 2025-04-14 01:13:31 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:34.209012 | orchestrator | 2025-04-14 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:34.209147 | orchestrator | 2025-04-14 01:13:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:34.209356 | orchestrator | 2025-04-14 01:13:34 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:34.210310 | orchestrator | 2025-04-14 01:13:34 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:37.257020 | orchestrator | 2025-04-14 01:13:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:37.257136 | orchestrator | 2025-04-14 01:13:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:37.258128 | orchestrator | 2025-04-14 01:13:37 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:37.259063 | orchestrator | 2025-04-14 01:13:37 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:40.316348 | orchestrator | 2025-04-14 01:13:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:40.316464 | orchestrator | 2025-04-14 01:13:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:40.318184 | orchestrator | 2025-04-14 01:13:40 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:40.319571 | orchestrator | 2025-04-14 01:13:40 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:40.320073 | orchestrator | 2025-04-14 01:13:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:43.360944 | orchestrator | 2025-04-14 01:13:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:43.365633 | orchestrator | 2025-04-14 01:13:43 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:43.368508 | orchestrator | 2025-04-14 01:13:43 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:46.425190 | orchestrator | 2025-04-14 01:13:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:46.425348 | orchestrator | 2025-04-14 01:13:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:46.426988 | orchestrator | 2025-04-14 01:13:46 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:46.430066 | orchestrator | 2025-04-14 01:13:46 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:49.482890 | orchestrator | 2025-04-14 01:13:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:49.483028 | orchestrator | 2025-04-14 01:13:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:49.486153 | orchestrator | 2025-04-14 01:13:49 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:52.536799 | orchestrator | 2025-04-14 01:13:49 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state 
STARTED 2025-04-14 01:13:52.536937 | orchestrator | 2025-04-14 01:13:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:52.536975 | orchestrator | 2025-04-14 01:13:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:52.539840 | orchestrator | 2025-04-14 01:13:52 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:52.542157 | orchestrator | 2025-04-14 01:13:52 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:55.594272 | orchestrator | 2025-04-14 01:13:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:55.594411 | orchestrator | 2025-04-14 01:13:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:55.595163 | orchestrator | 2025-04-14 01:13:55 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:55.597717 | orchestrator | 2025-04-14 01:13:55 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:58.651802 | orchestrator | 2025-04-14 01:13:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:13:58.651952 | orchestrator | 2025-04-14 01:13:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:13:58.652943 | orchestrator | 2025-04-14 01:13:58 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:13:58.654093 | orchestrator | 2025-04-14 01:13:58 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:13:58.654450 | orchestrator | 2025-04-14 01:13:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:14:01.698662 | orchestrator | 2025-04-14 01:14:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:14:01.700657 | orchestrator | 2025-04-14 01:14:01 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:14:01.702590 | orchestrator | 2025-04-14 01:14:01 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:14:04.759814 | orchestrator | 2025-04-14 01:14:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:14:04.759959 | orchestrator | 2025-04-14 01:14:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:14:04.761936 | orchestrator | 2025-04-14 01:14:04 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state STARTED 2025-04-14 01:14:04.763937 | orchestrator | 2025-04-14 01:14:04 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED 2025-04-14 01:14:07.819955 | orchestrator | 2025-04-14 01:14:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:14:07.820139 | orchestrator | 2025-04-14 01:14:07.820252 | orchestrator | 2025-04-14 01:14:07.820269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:14:07.820292 | orchestrator | 2025-04-14 01:14:07.820307 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:14:07.820322 | orchestrator | Monday 14 April 2025 01:12:07 +0000 (0:00:00.375) 0:00:00.375 ********** 2025-04-14 01:14:07.820360 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:14:07.820377 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:14:07.820391 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:14:07.820405 | orchestrator | 2025-04-14 01:14:07.820420 | orchestrator | TASK [Group hosts based on enabled services] 
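The long run of "Task … is in state STARTED" messages above is the deploy wrapper waiting for the per-service tasks it queued on the OSISM manager; the STARTED/SUCCESS values look like Celery-style task states, and each task is re-checked roughly once per second until it leaves STARTED (as 5d336fcb… and b4ad264d… eventually do). A minimal polling loop in that spirit, with a hypothetical get_task_state callable standing in for the real osism client, could look like this:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task until it leaves STARTED, mirroring the log output above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```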
*********************************** 2025-04-14 01:14:07.820434 | orchestrator | Monday 14 April 2025 01:12:08 +0000 (0:00:00.449) 0:00:00.824 ********** 2025-04-14 01:14:07.820448 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-04-14 01:14:07.820463 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-04-14 01:14:07.820477 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-04-14 01:14:07.820703 | orchestrator | 2025-04-14 01:14:07.820736 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-04-14 01:14:07.820762 | orchestrator | 2025-04-14 01:14:07.820787 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-14 01:14:07.820811 | orchestrator | Monday 14 April 2025 01:12:08 +0000 (0:00:00.339) 0:00:01.164 ********** 2025-04-14 01:14:07.820835 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:14:07.820853 | orchestrator | 2025-04-14 01:14:07.820867 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-04-14 01:14:07.820881 | orchestrator | Monday 14 April 2025 01:12:09 +0000 (0:00:00.816) 0:00:01.981 ********** 2025-04-14 01:14:07.820896 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-04-14 01:14:07.820910 | orchestrator | 2025-04-14 01:14:07.820932 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-04-14 01:14:07.820981 | orchestrator | Monday 14 April 2025 01:12:12 +0000 (0:00:03.349) 0:00:05.331 ********** 2025-04-14 01:14:07.821007 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-04-14 01:14:07.821030 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-04-14 01:14:07.821055 | orchestrator | 2025-04-14 01:14:07.821080 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-04-14 01:14:07.821104 | orchestrator | Monday 14 April 2025 01:12:19 +0000 (0:00:06.731) 0:00:12.063 ********** 2025-04-14 01:14:07.821129 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:14:07.821153 | orchestrator | 2025-04-14 01:14:07.821175 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-04-14 01:14:07.821197 | orchestrator | Monday 14 April 2025 01:12:22 +0000 (0:00:03.362) 0:00:15.425 ********** 2025-04-14 01:14:07.821221 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:14:07.821242 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-14 01:14:07.821266 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-04-14 01:14:07.821289 | orchestrator | 2025-04-14 01:14:07.821313 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-04-14 01:14:07.821337 | orchestrator | Monday 14 April 2025 01:12:31 +0000 (0:00:08.706) 0:00:24.131 ********** 2025-04-14 01:14:07.821362 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:14:07.821378 | orchestrator | 2025-04-14 01:14:07.821392 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-04-14 01:14:07.821406 | orchestrator | Monday 14 April 2025 
01:12:34 +0000 (0:00:03.189) 0:00:27.320 ********** 2025-04-14 01:14:07.821421 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-14 01:14:07.821434 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-04-14 01:14:07.821448 | orchestrator | 2025-04-14 01:14:07.821463 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-04-14 01:14:07.821477 | orchestrator | Monday 14 April 2025 01:12:42 +0000 (0:00:07.762) 0:00:35.083 ********** 2025-04-14 01:14:07.821491 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-04-14 01:14:07.821520 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-04-14 01:14:07.821534 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-04-14 01:14:07.821547 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-04-14 01:14:07.821561 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-04-14 01:14:07.821575 | orchestrator | 2025-04-14 01:14:07.821589 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-04-14 01:14:07.821603 | orchestrator | Monday 14 April 2025 01:12:58 +0000 (0:00:15.662) 0:00:50.745 ********** 2025-04-14 01:14:07.821617 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:14:07.821631 | orchestrator | 2025-04-14 01:14:07.821670 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-04-14 01:14:07.821685 | orchestrator | Monday 14 April 2025 01:12:59 +0000 (0:00:00.805) 0:00:51.551 ********** 2025-04-14 01:14:07.821717 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
<html><body><h1>503 Service Unavailable</h1>
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-04-14 01:14:07.821736 | orchestrator | 2025-04-14 01:14:07.821750 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:14:07.821771 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.821788 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.821802 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.821816 | orchestrator | 2025-04-14 01:14:07.821830 | orchestrator | 2025-04-14 01:14:07.821844 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:14:07.821858 | orchestrator | Monday 14 April 2025 01:13:02 +0000 (0:00:03.266) 0:00:54.818 ********** 2025-04-14 01:14:07.821872 | orchestrator | =============================================================================== 2025-04-14 01:14:07.821886 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.66s 2025-04-14 01:14:07.821900 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.71s 2025-04-14 01:14:07.821914 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.76s 2025-04-14 01:14:07.821928 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.73s 2025-04-14 01:14:07.821943 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.36s 2025-04-14 01:14:07.821957 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.35s 2025-04-14 01:14:07.821971 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.27s 2025-04-14 01:14:07.821985 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.19s 2025-04-14 01:14:07.821999 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.82s 2025-04-14 01:14:07.822064 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.81s 2025-04-14 01:14:07.822083 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2025-04-14 01:14:07.822098 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2025-04-14 01:14:07.822112 | orchestrator | 2025-04-14 01:14:07.822126 | orchestrator | 2025-04-14 01:14:07.822145 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:14:07.822169 | orchestrator | 2025-04-14 01:14:07.822183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:14:07.822198 | orchestrator | Monday 14 April 2025 01:11:30 +0000 (0:00:00.285) 0:00:00.285 ********** 2025-04-14 01:14:07.822212 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:14:07.822226 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:14:07.822241 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:14:07.822255 | orchestrator | 2025-04-14 01:14:07.822269 | orchestrator | TASK [Group hosts based 
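The only failure in this run is octavia's "Create amphora flavor" task: the request to https://api-int.testbed.osism.xyz:8774 came back with HAProxy's default 503 page ("No server is available to handle this request"), which usually means no healthy nova-api backend was registered at that moment; the following play then spends about 108 seconds waiting for the Nova public port before the deployment continues. A hedged diagnostic sketch with openstacksdk, assuming a "testbed" entry in clouds.yaml with admin credentials and using placeholder flavor sizes, would be:

```python
# Diagnostic sketch only: confirm the compute API is reachable again and
# retry the flavor that the octavia role failed to create.
import openstack

conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry

# List compute services through the same endpoint that returned the 503.
for service in conn.compute.services():
    print(service.binary, service.host, service.state)

# Re-create the amphora flavor if it is still missing (sizes are placeholders).
flavor = conn.compute.find_flavor("amphora")
if flavor is None:
    flavor = conn.compute.create_flavor(
        name="amphora", ram=1024, vcpus=1, disk=10, is_public=False)
print(flavor.id)
```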
on enabled services] *********************************** 2025-04-14 01:14:07.822283 | orchestrator | Monday 14 April 2025 01:11:31 +0000 (0:00:00.484) 0:00:00.770 ********** 2025-04-14 01:14:07.822297 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-04-14 01:14:07.822311 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-04-14 01:14:07.822326 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-04-14 01:14:07.822340 | orchestrator | 2025-04-14 01:14:07.822354 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-04-14 01:14:07.822368 | orchestrator | 2025-04-14 01:14:07.822382 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-04-14 01:14:07.822396 | orchestrator | Monday 14 April 2025 01:11:32 +0000 (0:00:00.655) 0:00:01.425 ********** 2025-04-14 01:14:07.822410 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:14:07.822424 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:14:07.822440 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:14:07.822462 | orchestrator | 2025-04-14 01:14:07.822476 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:14:07.822491 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.822505 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.822519 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:14:07.822534 | orchestrator | 2025-04-14 01:14:07.822548 | orchestrator | 2025-04-14 01:14:07.822562 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:14:07.822576 | orchestrator | Monday 14 April 2025 01:13:19 +0000 (0:01:47.855) 0:01:49.281 ********** 2025-04-14 01:14:07.822590 | orchestrator | =============================================================================== 2025-04-14 01:14:07.822604 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 107.86s 2025-04-14 01:14:07.822618 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2025-04-14 01:14:07.822633 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2025-04-14 01:14:07.822706 | orchestrator | 2025-04-14 01:14:07.822725 | orchestrator | 2025-04-14 01:14:07.822739 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-04-14 01:14:07.822753 | orchestrator | 2025-04-14 01:14:07.822768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:14:07.822790 | orchestrator | Monday 14 April 2025 01:12:19 +0000 (0:00:00.328) 0:00:00.328 ********** 2025-04-14 01:14:07.822806 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:14:07.822822 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:14:07.822836 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:14:07.822850 | orchestrator | 2025-04-14 01:14:07.822865 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:14:07.822879 | orchestrator | Monday 14 April 2025 01:12:19 +0000 (0:00:00.426) 0:00:00.754 ********** 2025-04-14 01:14:07.822893 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-04-14 01:14:07.822907 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-04-14 01:14:07.822921 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-04-14 01:14:07.822935 | orchestrator | 2025-04-14 01:14:07.822957 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-04-14 01:14:07.822971 | orchestrator | 2025-04-14 01:14:07.822986 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-14 01:14:07.822999 | orchestrator | Monday 14 April 2025 01:12:19 +0000 (0:00:00.288) 0:00:01.043 ********** 2025-04-14 01:14:07.823014 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:14:07.823028 | orchestrator | 2025-04-14 01:14:07.823042 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-04-14 01:14:07.823056 | orchestrator | Monday 14 April 2025 01:12:20 +0000 (0:00:00.779) 0:00:01.823 ********** 2025-04-14 01:14:07.823072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823122 | orchestrator | 2025-04-14 01:14:07.823135 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-04-14 01:14:07.823150 | orchestrator | Monday 14 April 2025 01:12:21 +0000 (0:00:00.932) 0:00:02.756 ********** 2025-04-14 
01:14:07.823164 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-04-14 01:14:07.823183 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-04-14 01:14:07.823197 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:14:07.823209 | orchestrator | 2025-04-14 01:14:07.823222 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-04-14 01:14:07.823234 | orchestrator | Monday 14 April 2025 01:12:22 +0000 (0:00:00.568) 0:00:03.325 ********** 2025-04-14 01:14:07.823247 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:14:07.823259 | orchestrator | 2025-04-14 01:14:07.823272 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-04-14 01:14:07.823284 | orchestrator | Monday 14 April 2025 01:12:22 +0000 (0:00:00.617) 0:00:03.942 ********** 2025-04-14 01:14:07.823321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823367 | orchestrator | 2025-04-14 01:14:07.823380 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-04-14 01:14:07.823393 | orchestrator | Monday 14 April 2025 01:12:24 +0000 (0:00:01.581) 0:00:05.524 ********** 2025-04-14 01:14:07.823406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823432 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:14:07.823444 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:14:07.823464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823484 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:14:07.823497 | orchestrator | 2025-04-14 01:14:07.823509 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-04-14 01:14:07.823522 | orchestrator | Monday 14 April 2025 01:12:24 +0000 (0:00:00.575) 0:00:06.099 ********** 2025-04-14 01:14:07.823534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823548 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:14:07.823560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823573 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:14:07.823586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-04-14 01:14:07.823599 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:14:07.823612 | orchestrator | 2025-04-14 01:14:07.823624 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-04-14 01:14:07.823637 | orchestrator | Monday 14 April 2025 01:12:25 +0000 (0:00:00.747) 0:00:06.847 ********** 2025-04-14 01:14:07.823668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823728 | orchestrator | 2025-04-14 01:14:07.823741 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-04-14 01:14:07.823753 | orchestrator | Monday 14 April 2025 01:12:27 +0000 (0:00:01.437) 0:00:08.284 ********** 2025-04-14 01:14:07.823766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-04-14 01:14:07.823806 | orchestrator | 2025-04-14 01:14:07.823818 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-04-14 01:14:07.823831 | orchestrator | Monday 14 April 2025 01:12:28 +0000 (0:00:01.584) 0:00:09.869 ********** 2025-04-14 01:14:07.823849 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:14:07.823862 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:14:07.823874 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:14:07.823887 | orchestrator | 2025-04-14 01:14:07.823899 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-04-14 01:14:07.823912 | orchestrator | Monday 14 April 2025 01:12:28 +0000 (0:00:00.335) 0:00:10.205 ********** 2025-04-14 01:14:07.823924 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-14 01:14:07.823937 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-14 01:14:07.823950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-04-14 01:14:07.823962 | orchestrator | 2025-04-14 01:14:07.823975 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-04-14 01:14:07.823988 | orchestrator | Monday 14 April 2025 01:12:30 +0000 (0:00:01.497) 0:00:11.703 ********** 2025-04-14 01:14:07.824001 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-14 01:14:07.824014 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-14 01:14:07.824026 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-04-14 01:14:07.824039 | orchestrator | 2025-04-14 01:14:07.824056 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-04-14 01:14:07.824069 | orchestrator | Monday 14 April 2025 01:12:31 +0000 (0:00:01.440) 0:00:13.143 ********** 2025-04-14 01:14:07.824082 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:14:07.824094 | orchestrator | 2025-04-14 01:14:07.824107 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-04-14 01:14:07.824119 | orchestrator | Monday 14 April 2025 01:12:32 +0000 (0:00:00.451) 0:00:13.595 ********** 2025-04-14 01:14:07.824132 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-04-14 01:14:07.824145 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-04-14 01:14:07.824157 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:14:07.824170 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:14:07.824182 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:14:07.824195 | orchestrator | 2025-04-14 01:14:07.824207 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-04-14 01:14:07.824220 | orchestrator | Monday 14 April 2025 01:12:33 +0000 (0:00:00.896) 0:00:14.491 ********** 2025-04-14 01:14:07.824232 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:14:07.824245 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:14:07.824257 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:14:07.824270 | orchestrator | 2025-04-14 01:14:07.824282 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-04-14 01:14:07.824295 | orchestrator | Monday 14 April 2025 01:12:33 +0000 (0:00:00.443) 0:00:14.935 ********** 2025-04-14 01:14:07.824307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1067030, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3433905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1067030, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3433905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1067030, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3433905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1067025, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1067025, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1067025, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824399 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1067022, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3343902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1067022, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3343902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1067022, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3343902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1067028, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1067028, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1067028, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:14:07.824533 | orchestrator | 2025-04-14 01:14:07 | INFO  | Task 83a87021-57a1-4c72-beb4-1f29caad0114 is in state SUCCESS 2025-04-14 01:14:07.824556 | orchestrator | 2025-04-14 01:14:07.824578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1067016, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3283901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1067016, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3283901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1067016, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3283901, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1067023, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3353903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1067023, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3353903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1067023, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3353903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1067027, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1067027, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1067027, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3393903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1067015, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3273902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1067015, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3273902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1067015, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3273902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1067010, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1067010, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1067010, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32239, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1067018, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1067018, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1067018, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32939, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1067012, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.824983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1067012, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825003 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1067012, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32539, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1067026, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1067026, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1067026, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3383904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1067020, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3303902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1067020, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3303902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1067020, 'dev': 156, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1744589710.3303902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1067029, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3403904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1067029, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3403904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1067029, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3403904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1067014, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1067014, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1067014, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1067024, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3373902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1067024, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3373902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1067024, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3373902, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1067011, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3243902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1067011, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3243902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1067011, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3243902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1067013, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1067013, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1067013, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.32639, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1067021, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3323903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1067021, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3323903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1067021, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3323903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1067040, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3633907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825401 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1067040, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3633907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1067040, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3633907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1067038, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3563907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1067038, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3563907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1067038, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3563907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1067046, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.370391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1067046, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.370391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1067046, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.370391, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1067032, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3443904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1067032, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3443904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-04-14 01:14:07.825558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1067032, 'dev': 156, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1744589710.3443904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/rabbitmq.json, path=/operations/grafana/dashboards/infrastructure/rabbitmq.json, mode=0644, owner=root:root, size=222049)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json, path=/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json, mode=0644, owner=root:root, size=70691)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/opensearch.json, path=/operations/grafana/dashboards/infrastructure/opensearch.json, mode=0644, owner=root:root, size=65458)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/cadvisor.json, path=/operations/grafana/dashboards/infrastructure/cadvisor.json, mode=0644, owner=root:root, size=53882)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/memcached.json, path=/operations/grafana/dashboards/infrastructure/memcached.json, mode=0644, owner=root:root, size=24243)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/redfish.json, path=/operations/grafana/dashboards/infrastructure/redfish.json, mode=0644, owner=root:root, size=38087)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/prometheus.json, path=/operations/grafana/dashboards/infrastructure/prometheus.json, mode=0644, owner=root:root, size=100249)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/elasticsearch.json, path=/operations/grafana/dashboards/infrastructure/elasticsearch.json, mode=0644, owner=root:root, size=187864)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/database.json, path=/operations/grafana/dashboards/infrastructure/database.json, mode=0644, owner=root:root, size=30898)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/fluentd.json, path=/operations/grafana/dashboards/infrastructure/fluentd.json, mode=0644, owner=root:root, size=82960)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=infrastructure/haproxy.json, path=/operations/grafana/dashboards/infrastructure/haproxy.json, mode=0644, owner=root:root, size=410814)
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1] [testbed-node-2] [testbed-node-0] => (item=openstack/openstack.json, path=/operations/grafana/dashboards/openstack/openstack.json, mode=0644, owner=root:root, size=57270)
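The loop above is the "grafana : Copying over custom dashboards" task distributing the dashboard JSON files staged under /operations/grafana/dashboards to every host in the grafana group. A minimal sketch of such a find-and-copy loop, assuming the dashboards are staged at /operations/grafana/dashboards on the deployment host and land in /etc/kolla/grafana/dashboards on the targets (the actual kolla-ansible task builds its file list differently):

- name: Find custom Grafana dashboards
  ansible.builtin.find:
    # source directory is an assumption based on the paths visible in the log
    paths: /operations/grafana/dashboards
    patterns: "*.json"
    recurse: true
  delegate_to: localhost
  run_once: true
  register: grafana_dashboards

- name: Copying over custom dashboards
  ansible.builtin.copy:
    src: "{{ item.path }}"
    # keep the infrastructure/ and openstack/ sub-directories by using the relative path
    dest: "/etc/kolla/grafana/dashboards/{{ item.path | relpath('/operations/grafana/dashboards') }}"
    mode: "0644"
    owner: root
    group: root
  loop: "{{ grafana_dashboards.files }}"

Keying the destination on the path relative to the dashboard directory is what yields item names such as infrastructure/rabbitmq.json and openstack/openstack.json in the output above.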
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:07 +0000 (0:00:34.241) 0:00:49.176 **********
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-2] [testbed-node-1] [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
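The item echoed by "Check grafana containers" is the Grafana service definition the role iterates over. Rendered as YAML it corresponds roughly to the following; the values are taken from the log output, while the variable name grafana_services only follows the usual kolla-ansible naming convention and is an assumption here:

grafana_services:
  grafana:
    container_name: grafana
    group: grafana
    enabled: true
    image: registry.osism.tech/kolla/release/grafana:11.4.0.20241206
    volumes:
      - "/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
    dimensions: {}
    haproxy:
      grafana_server:
        enabled: "yes"
        mode: http
        external: false
        port: "3000"
        listen_port: "3000"
      grafana_server_external:
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "3000"
        listen_port: "3000"

The haproxy sub-dictionary is what drives the creation of the internal grafana_server listener and the external listener on api.testbed.osism.xyz, both on port 3000.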
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:09 +0000 (0:00:01.077) 0:00:50.254 **********
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:11 +0000 (0:00:02.495) 0:00:52.750 **********
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:13 +0000 (0:00:02.238) 0:00:54.988 **********
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:13 +0000 (0:00:00.059) 0:00:55.048 **********
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:13 +0000 (0:00:00.068) 0:00:55.116 **********
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:14 +0000 (0:00:00.256) 0:00:55.373 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-1]
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-2]
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:15 +0000 (0:00:01.705) 0:00:57.078 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-1]
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-2]
2025-04-14 01:14:07 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-04-14 01:14:07 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-04-14 01:14:07 | orchestrator | ok: [testbed-node-0]
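The handler sequence above implements a rolling restart: only the first host in the grafana group restarts its container right away, and the play then polls that instance until it answers again before the remaining nodes are restarted (see "Restart remaining grafana containers" below). The two "FAILED - RETRYING" lines come from such an until/retries loop; "(12 retries left)" suggests retries: 12. A rough sketch of the waiting handler, with a hypothetical probe URL and delay:

- name: Waiting for grafana to start on first node
  ansible.builtin.uri:
    # assumption: probe the local Grafana port; the real handler builds this URL from role variables
    url: "http://127.0.0.1:3000/login"
    status_code: 200
  register: grafana_ready
  until: grafana_ready.status == 200
  retries: 12
  delay: 10        # assumption, only the retry count is visible in the log
  # restricting the handler to the first grafana host is why testbed-node-1/2 report "skipping"
  when: inventory_hostname == groups['grafana'] | first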
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:13:42 +0000 (0:00:26.522) 0:01:23.601 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-1]
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-2]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:14:00 +0000 (0:00:18.477) 0:01:42.078 **********
2025-04-14 01:14:07 | orchestrator | ok: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:14:03 +0000 (0:00:02.182) 0:01:44.261 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-1]
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-2]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:14:03 +0000 (0:00:00.440) 0:01:44.702 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-04-14 01:14:07 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:14:05 +0000 (0:00:02.462) 0:01:47.164 **********
2025-04-14 01:14:07 | orchestrator | skipping: [testbed-node-0]
2025-04-14 01:14:07 | orchestrator |
2025-04-14 01:14:07 | orchestrator | PLAY RECAP *********************************************************************
2025-04-14 01:14:07 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-14 01:14:07 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-14 01:14:07 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-04-14 01:14:07 | orchestrator |
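The "Enable grafana datasources" task iterates over a dictionary of data sources and only applies entries with enabled set to true, which is why the influxdb item is skipped and only the opensearch item is changed. Expressed as YAML, the logged items correspond to a structure along these lines (the variable name grafana_data_sources is an assumption; the values are taken from the log):

grafana_data_sources:
  influxdb:
    enabled: false
    data:
      isDefault: true
      database: telegraf
      name: telegraf
      type: influxdb
      url: "https://api-int.testbed.osism.xyz:8086"
      access: proxy
      basicAuth: false
  opensearch:
    enabled: true
    data:
      name: opensearch
      type: grafana-opensearch-datasource
      access: proxy
      url: "https://api-int.testbed.osism.xyz:9200"
      jsonData:
        flavor: OpenSearch
        database: "flog-*"
        version: "2.11.1"
        timeField: "@timestamp"
        logLevelField: log_level

With this configuration Grafana reaches OpenSearch through the internal endpoint api-int.testbed.osism.xyz:9200 and reads log documents from indices matching flog-*.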
2025-04-14 01:14:07 | orchestrator | TASKS RECAP ********************************************************************
2025-04-14 01:14:07 | orchestrator | Monday 14 April 2025 01:14:06 +0000 (0:00:00.397) 0:01:47.562 **********
2025-04-14 01:14:07 | orchestrator | ===============================================================================
2025-04-14 01:14:07 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.24s
2025-04-14 01:14:07 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.52s
2025-04-14 01:14:07 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 18.48s
2025-04-14 01:14:07 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.50s
2025-04-14 01:14:07 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.46s
2025-04-14 01:14:07 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.24s
2025-04-14 01:14:07 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s
2025-04-14 01:14:07 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.71s
2025-04-14 01:14:07 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.59s
2025-04-14 01:14:07 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.58s
2025-04-14 01:14:07 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.50s
2025-04-14 01:14:07 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.44s
2025-04-14 01:14:07 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.44s
2025-04-14 01:14:07 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.08s
2025-04-14 01:14:07 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.93s
2025-04-14 01:14:07 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.90s
2025-04-14 01:14:07 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.78s
2025-04-14 01:14:07 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.75s
2025-04-14 01:14:07 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s
2025-04-14 01:14:07 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.58s
2025-04-14 01:14:07 | orchestrator | 2025-04-14 01:14:07 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED
2025-04-14 01:14:10 | orchestrator | 2025-04-14 01:14:07 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:14:10 | orchestrator | 2025-04-14 01:14:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:14:13 | orchestrator | 2025-04-14 01:14:10 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state STARTED
Tasks afc851a2-7042-41e3-be43-561439f9152f and 6ae5467d-e75b-40af-ad25-94a93a6fc412 remain in state STARTED and are re-checked every few seconds, with the same pair of messages repeating between the entries shown here.
2025-04-14 01:14:56 | orchestrator | 2025-04-14 01:14:56 | INFO  | Task 83f1aa22-6e78-4c94-a2d6-0b1140720616 is in state STARTED
2025-04-14 01:15:08 | orchestrator | 2025-04-14 01:15:08 | INFO  | Task 83f1aa22-6e78-4c94-a2d6-0b1140720616 is in state SUCCESS
The two remaining tasks keep being polled in the same way until:
2025-04-14 01:17:50 | orchestrator | 2025-04-14 01:17:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:17:50 | orchestrator | 2025-04-14 01:17:50 | INFO  | Task 6ae5467d-e75b-40af-ad25-94a93a6fc412 is in state SUCCESS
2025-04-14 01:17:50 | orchestrator |
2025-04-14 01:17:50 | orchestrator | None
2025-04-14 01:17:50 | orchestrator |
2025-04-14 01:17:50 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-04-14 01:17:50.559853
| orchestrator | 2025-04-14 01:17:50.559868 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-04-14 01:17:50.559883 | orchestrator | Monday 14 April 2025 01:09:10 +0000 (0:00:00.374) 0:00:00.374 ********** 2025-04-14 01:17:50.559897 | orchestrator | changed: [testbed-manager] 2025-04-14 01:17:50.559913 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.559927 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.559941 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.559955 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.559969 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.559983 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.559996 | orchestrator | 2025-04-14 01:17:50.560010 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-04-14 01:17:50.560024 | orchestrator | Monday 14 April 2025 01:09:11 +0000 (0:00:01.219) 0:00:01.594 ********** 2025-04-14 01:17:50.560038 | orchestrator | changed: [testbed-manager] 2025-04-14 01:17:50.560052 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.560066 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.560080 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.560094 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.560133 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.560148 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.560264 | orchestrator | 2025-04-14 01:17:50.560281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-04-14 01:17:50.560295 | orchestrator | Monday 14 April 2025 01:09:12 +0000 (0:00:01.438) 0:00:03.032 ********** 2025-04-14 01:17:50.560309 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-04-14 01:17:50.560324 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-04-14 01:17:50.560338 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-04-14 01:17:50.560355 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-04-14 01:17:50.560371 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-04-14 01:17:50.560386 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-04-14 01:17:50.560401 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-04-14 01:17:50.560416 | orchestrator | 2025-04-14 01:17:50.560431 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-04-14 01:17:50.560463 | orchestrator | 2025-04-14 01:17:50.560479 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-14 01:17:50.560495 | orchestrator | Monday 14 April 2025 01:09:13 +0000 (0:00:01.131) 0:00:04.163 ********** 2025-04-14 01:17:50.560626 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.560643 | orchestrator | 2025-04-14 01:17:50.560658 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-04-14 01:17:50.560674 | orchestrator | Monday 14 April 2025 01:09:14 +0000 (0:00:00.783) 0:00:04.947 ********** 2025-04-14 01:17:50.560690 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-04-14 01:17:50.560706 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-04-14 01:17:50.560740 | 
orchestrator | 2025-04-14 01:17:50.560755 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-04-14 01:17:50.560769 | orchestrator | Monday 14 April 2025 01:09:19 +0000 (0:00:04.368) 0:00:09.316 ********** 2025-04-14 01:17:50.560783 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 01:17:50.560808 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-04-14 01:17:50.560822 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.560857 | orchestrator | 2025-04-14 01:17:50.560872 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-14 01:17:50.560887 | orchestrator | Monday 14 April 2025 01:09:24 +0000 (0:00:05.100) 0:00:14.416 ********** 2025-04-14 01:17:50.560901 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.560915 | orchestrator | 2025-04-14 01:17:50.560929 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-04-14 01:17:50.560943 | orchestrator | Monday 14 April 2025 01:09:25 +0000 (0:00:00.923) 0:00:15.340 ********** 2025-04-14 01:17:50.560957 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.560970 | orchestrator | 2025-04-14 01:17:50.560984 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-04-14 01:17:50.560998 | orchestrator | Monday 14 April 2025 01:09:26 +0000 (0:00:01.783) 0:00:17.123 ********** 2025-04-14 01:17:50.561012 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.561026 | orchestrator | 2025-04-14 01:17:50.561040 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-14 01:17:50.561054 | orchestrator | Monday 14 April 2025 01:09:33 +0000 (0:00:06.391) 0:00:23.515 ********** 2025-04-14 01:17:50.561067 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.561081 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.561095 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.561109 | orchestrator | 2025-04-14 01:17:50.561129 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-14 01:17:50.561144 | orchestrator | Monday 14 April 2025 01:09:34 +0000 (0:00:00.899) 0:00:24.414 ********** 2025-04-14 01:17:50.561168 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.561183 | orchestrator | 2025-04-14 01:17:50.561196 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-04-14 01:17:50.561211 | orchestrator | Monday 14 April 2025 01:10:03 +0000 (0:00:29.720) 0:00:54.135 ********** 2025-04-14 01:17:50.561225 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.561239 | orchestrator | 2025-04-14 01:17:50.561253 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-14 01:17:50.561267 | orchestrator | Monday 14 April 2025 01:10:18 +0000 (0:00:14.681) 0:01:08.817 ********** 2025-04-14 01:17:50.561281 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.561295 | orchestrator | 2025-04-14 01:17:50.561309 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-14 01:17:50.561323 | orchestrator | Monday 14 April 2025 01:10:29 +0000 (0:00:11.289) 0:01:20.106 ********** 2025-04-14 01:17:50.561348 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.561363 | orchestrator | 2025-04-14 01:17:50.561377 | 
orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-04-14 01:17:50.561391 | orchestrator | Monday 14 April 2025 01:10:31 +0000 (0:00:01.615) 0:01:21.722 ********** 2025-04-14 01:17:50.561405 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.561419 | orchestrator | 2025-04-14 01:17:50.561434 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-14 01:17:50.561447 | orchestrator | Monday 14 April 2025 01:10:32 +0000 (0:00:00.711) 0:01:22.434 ********** 2025-04-14 01:17:50.561461 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.561476 | orchestrator | 2025-04-14 01:17:50.561490 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-04-14 01:17:50.561526 | orchestrator | Monday 14 April 2025 01:10:33 +0000 (0:00:00.982) 0:01:23.416 ********** 2025-04-14 01:17:50.561542 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.561556 | orchestrator | 2025-04-14 01:17:50.561570 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-14 01:17:50.561585 | orchestrator | Monday 14 April 2025 01:10:49 +0000 (0:00:16.321) 0:01:39.738 ********** 2025-04-14 01:17:50.561599 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.561613 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.561627 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.561642 | orchestrator | 2025-04-14 01:17:50.561656 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-04-14 01:17:50.561670 | orchestrator | 2025-04-14 01:17:50.561684 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-04-14 01:17:50.561698 | orchestrator | Monday 14 April 2025 01:10:49 +0000 (0:00:00.348) 0:01:40.087 ********** 2025-04-14 01:17:50.561712 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.561726 | orchestrator | 2025-04-14 01:17:50.561740 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-04-14 01:17:50.561754 | orchestrator | Monday 14 April 2025 01:10:51 +0000 (0:00:01.469) 0:01:41.557 ********** 2025-04-14 01:17:50.561768 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.561782 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.561797 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.561811 | orchestrator | 2025-04-14 01:17:50.561825 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-04-14 01:17:50.561839 | orchestrator | Monday 14 April 2025 01:10:53 +0000 (0:00:02.620) 0:01:44.178 ********** 2025-04-14 01:17:50.561853 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.561867 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.561881 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.561895 | orchestrator | 2025-04-14 01:17:50.561909 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-14 01:17:50.561923 | orchestrator | Monday 14 April 2025 01:10:56 +0000 (0:00:02.354) 0:01:46.533 ********** 2025-04-14 01:17:50.561944 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.561958 | orchestrator | skipping: 
[testbed-node-1] 2025-04-14 01:17:50.561972 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.561986 | orchestrator | 2025-04-14 01:17:50.562000 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-14 01:17:50.562059 | orchestrator | Monday 14 April 2025 01:10:57 +0000 (0:00:00.697) 0:01:47.231 ********** 2025-04-14 01:17:50.562077 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-14 01:17:50.562091 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562106 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-14 01:17:50.562120 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562134 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-04-14 01:17:50.562148 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-04-14 01:17:50.562162 | orchestrator | 2025-04-14 01:17:50.562176 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-04-14 01:17:50.562190 | orchestrator | Monday 14 April 2025 01:11:05 +0000 (0:00:08.907) 0:01:56.139 ********** 2025-04-14 01:17:50.562204 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.562219 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562233 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562247 | orchestrator | 2025-04-14 01:17:50.562262 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-04-14 01:17:50.562276 | orchestrator | Monday 14 April 2025 01:11:06 +0000 (0:00:00.358) 0:01:56.497 ********** 2025-04-14 01:17:50.562290 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-04-14 01:17:50.562310 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.562324 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-04-14 01:17:50.562338 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562352 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-04-14 01:17:50.562366 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562381 | orchestrator | 2025-04-14 01:17:50.562395 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-14 01:17:50.562409 | orchestrator | Monday 14 April 2025 01:11:07 +0000 (0:00:00.992) 0:01:57.490 ********** 2025-04-14 01:17:50.562423 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562437 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562451 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.562465 | orchestrator | 2025-04-14 01:17:50.562479 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-04-14 01:17:50.562493 | orchestrator | Monday 14 April 2025 01:11:07 +0000 (0:00:00.514) 0:01:58.004 ********** 2025-04-14 01:17:50.562555 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562571 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562585 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.562599 | orchestrator | 2025-04-14 01:17:50.562613 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-04-14 01:17:50.562627 | orchestrator | Monday 14 April 2025 01:11:08 +0000 (0:00:01.005) 0:01:59.010 ********** 2025-04-14 01:17:50.562641 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562664 | orchestrator | skipping: 
[testbed-node-2] 2025-04-14 01:17:50.562679 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.562693 | orchestrator | 2025-04-14 01:17:50.562707 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-04-14 01:17:50.562721 | orchestrator | Monday 14 April 2025 01:11:11 +0000 (0:00:02.393) 0:02:01.403 ********** 2025-04-14 01:17:50.562735 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562749 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562763 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.562777 | orchestrator | 2025-04-14 01:17:50.562792 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-14 01:17:50.562805 | orchestrator | Monday 14 April 2025 01:11:32 +0000 (0:00:20.877) 0:02:22.281 ********** 2025-04-14 01:17:50.562828 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562842 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562856 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.562870 | orchestrator | 2025-04-14 01:17:50.562884 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-14 01:17:50.562898 | orchestrator | Monday 14 April 2025 01:11:42 +0000 (0:00:10.253) 0:02:32.534 ********** 2025-04-14 01:17:50.562912 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.562926 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.562946 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.562960 | orchestrator | 2025-04-14 01:17:50.562974 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-04-14 01:17:50.562988 | orchestrator | Monday 14 April 2025 01:11:44 +0000 (0:00:01.734) 0:02:34.268 ********** 2025-04-14 01:17:50.563002 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.563017 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.563031 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.563045 | orchestrator | 2025-04-14 01:17:50.563059 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-04-14 01:17:50.563073 | orchestrator | Monday 14 April 2025 01:11:55 +0000 (0:00:11.326) 0:02:45.595 ********** 2025-04-14 01:17:50.563087 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.563101 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.563115 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.563129 | orchestrator | 2025-04-14 01:17:50.563143 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-04-14 01:17:50.563157 | orchestrator | Monday 14 April 2025 01:11:57 +0000 (0:00:02.017) 0:02:47.613 ********** 2025-04-14 01:17:50.563171 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.563185 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.563199 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.563213 | orchestrator | 2025-04-14 01:17:50.563227 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-04-14 01:17:50.563242 | orchestrator | 2025-04-14 01:17:50.563256 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-14 01:17:50.563270 | orchestrator | Monday 14 April 2025 01:11:57 +0000 (0:00:00.594) 0:02:48.207 ********** 2025-04-14 01:17:50.563284 | 
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.563299 | orchestrator | 2025-04-14 01:17:50.563313 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-04-14 01:17:50.563327 | orchestrator | Monday 14 April 2025 01:11:58 +0000 (0:00:00.900) 0:02:49.108 ********** 2025-04-14 01:17:50.563341 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-04-14 01:17:50.563355 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-04-14 01:17:50.563370 | orchestrator | 2025-04-14 01:17:50.563383 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-04-14 01:17:50.563397 | orchestrator | Monday 14 April 2025 01:12:02 +0000 (0:00:03.384) 0:02:52.493 ********** 2025-04-14 01:17:50.563412 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-04-14 01:17:50.563427 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-04-14 01:17:50.563441 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-04-14 01:17:50.563456 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-04-14 01:17:50.563470 | orchestrator | 2025-04-14 01:17:50.563484 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-04-14 01:17:50.563498 | orchestrator | Monday 14 April 2025 01:12:08 +0000 (0:00:06.528) 0:02:59.021 ********** 2025-04-14 01:17:50.563538 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-04-14 01:17:50.563553 | orchestrator | 2025-04-14 01:17:50.563567 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-04-14 01:17:50.563581 | orchestrator | Monday 14 April 2025 01:12:12 +0000 (0:00:03.342) 0:03:02.364 ********** 2025-04-14 01:17:50.563596 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-04-14 01:17:50.563610 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-04-14 01:17:50.563623 | orchestrator | 2025-04-14 01:17:50.563637 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-04-14 01:17:50.563651 | orchestrator | Monday 14 April 2025 01:12:16 +0000 (0:00:04.098) 0:03:06.462 ********** 2025-04-14 01:17:50.563665 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-04-14 01:17:50.563679 | orchestrator | 2025-04-14 01:17:50.563693 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-04-14 01:17:50.563712 | orchestrator | Monday 14 April 2025 01:12:19 +0000 (0:00:03.187) 0:03:09.649 ********** 2025-04-14 01:17:50.563727 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-04-14 01:17:50.563741 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-04-14 01:17:50.563755 | orchestrator | 2025-04-14 01:17:50.563769 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-04-14 01:17:50.563789 | orchestrator | Monday 14 April 2025 01:12:27 +0000 (0:00:08.267) 0:03:17.917 ********** 2025-04-14 01:17:50.563839 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.563859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.563883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.563907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.563936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.563951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.563966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.563981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.564003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.564018 | orchestrator | 2025-04-14 01:17:50.564032 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-04-14 01:17:50.564046 | orchestrator | Monday 14 April 2025 01:12:29 +0000 (0:00:01.663) 0:03:19.581 ********** 2025-04-14 01:17:50.564060 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.564074 | orchestrator | 2025-04-14 01:17:50.564088 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-04-14 01:17:50.564102 | orchestrator | Monday 14 April 2025 01:12:29 +0000 (0:00:00.208) 0:03:19.790 ********** 2025-04-14 01:17:50.564116 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.564130 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.564144 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.564158 | orchestrator | 2025-04-14 01:17:50.564172 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-04-14 01:17:50.564186 | orchestrator | Monday 14 April 2025 01:12:30 +0000 (0:00:00.452) 0:03:20.243 ********** 2025-04-14 01:17:50.564200 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-04-14 01:17:50.564214 | orchestrator | 2025-04-14 01:17:50.564234 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-04-14 01:17:50.564249 | orchestrator | Monday 14 April 2025 01:12:30 +0000 (0:00:00.385) 0:03:20.629 ********** 2025-04-14 01:17:50.564263 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.564277 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.564291 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.564305 | orchestrator | 2025-04-14 01:17:50.564319 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-04-14 01:17:50.564333 | orchestrator | Monday 14 April 2025 01:12:30 +0000 (0:00:00.305) 0:03:20.934 ********** 2025-04-14 01:17:50.564347 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.564361 | orchestrator | 2025-04-14 01:17:50.564375 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-14 01:17:50.564389 | orchestrator | Monday 14 April 2025 01:12:31 +0000 (0:00:00.933) 0:03:21.868 ********** 2025-04-14 01:17:50.564403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.564443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.564467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.564482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.564557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.564586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.564601 | orchestrator | 2025-04-14 01:17:50.564615 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-14 01:17:50.564630 | orchestrator | Monday 14 April 2025 01:12:34 +0000 (0:00:02.512) 0:03:24.380 ********** 2025-04-14 01:17:50.564644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.564659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.564680 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.565191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.565227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.565242 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.565256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-04-14 01:17:50.565270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.565284 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.565298 | orchestrator | 2025-04-14 01:17:50.565312 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-14 01:17:50.565330 | orchestrator | Monday 14 April 2025 01:12:34 +0000 (0:00:00.757) 0:03:25.138 ********** 2025-04-14 01:17:50.565363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.565385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.565400 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.565415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.565430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.565576 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.566211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.566616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.566637 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.566651 | orchestrator | 2025-04-14 01:17:50.566665 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-04-14 01:17:50.566678 | orchestrator | Monday 14 April 2025 01:12:36 +0000 (0:00:01.284) 0:03:26.422 ********** 2025-04-14 01:17:50.566693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.566708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.566764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.567336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.567357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.567371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.567385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.567732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.567793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.567817 | orchestrator | 2025-04-14 01:17:50.567831 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-04-14 01:17:50.567845 | orchestrator | Monday 14 April 2025 01:12:38 +0000 (0:00:02.717) 0:03:29.139 ********** 2025-04-14 01:17:50.567860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.567875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-04-14 01:17:50.567984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.568014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.568029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.568057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.568163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568191 | orchestrator | 2025-04-14 01:17:50.568206 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-04-14 01:17:50.568220 | orchestrator | Monday 14 April 2025 01:12:46 +0000 (0:00:07.258) 0:03:36.397 ********** 2025-04-14 01:17:50.568234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.568248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568276 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.568290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-04-14 01:17:50.568390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})  2025-04-14 01:17:50.568412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568454 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.568480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568571 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.568590 | orchestrator | 2025-04-14 01:17:50.568611 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-04-14 01:17:50.568625 | orchestrator | Monday 14 April 2025 01:12:47 +0000 (0:00:00.860) 0:03:37.257 ********** 2025-04-14 01:17:50.568637 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.568650 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.568663 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.568675 | orchestrator | 2025-04-14 01:17:50.568688 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-04-14 01:17:50.568700 | orchestrator | Monday 14 April 2025 01:12:48 +0000 (0:00:01.777) 0:03:39.035 ********** 2025-04-14 01:17:50.568783 | orchestrator | skipping: 
[testbed-node-0] 2025-04-14 01:17:50.568798 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.568808 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.568819 | orchestrator | 2025-04-14 01:17:50.568829 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-04-14 01:17:50.568839 | orchestrator | Monday 14 April 2025 01:12:49 +0000 (0:00:00.492) 0:03:39.527 ********** 2025-04-14 01:17:50.568850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.568862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.568887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-04-14 01:17:50.568962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.568978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.568989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.569000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.569010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.569021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.569039 | orchestrator | 2025-04-14 01:17:50.569050 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-14 01:17:50.569060 | orchestrator | Monday 14 April 2025 01:12:51 +0000 (0:00:02.155) 0:03:41.683 ********** 2025-04-14 01:17:50.569070 | orchestrator | 2025-04-14 01:17:50.569081 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-14 01:17:50.569091 | orchestrator | Monday 14 April 2025 01:12:51 +0000 (0:00:00.285) 0:03:41.969 ********** 2025-04-14 01:17:50.569101 | orchestrator | 2025-04-14 01:17:50.569112 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-04-14 01:17:50.569122 | orchestrator | Monday 14 April 2025 01:12:51 +0000 (0:00:00.109) 0:03:42.078 ********** 2025-04-14 01:17:50.569132 | orchestrator | 2025-04-14 01:17:50.569206 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-04-14 01:17:50.569223 | orchestrator | Monday 14 April 2025 01:12:52 +0000 (0:00:00.288) 0:03:42.367 ********** 2025-04-14 01:17:50.569233 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.569243 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.569253 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.569264 | orchestrator | 2025-04-14 01:17:50.569274 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-04-14 01:17:50.569284 | orchestrator | Monday 14 April 2025 01:13:11 +0000 (0:00:19.503) 0:04:01.871 ********** 2025-04-14 01:17:50.569294 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.569304 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.569314 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.569324 | orchestrator | 2025-04-14 01:17:50.569335 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-04-14 01:17:50.569345 | orchestrator | 2025-04-14 01:17:50.569355 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-14 01:17:50.569365 | orchestrator | Monday 14 April 2025 01:13:21 +0000 (0:00:10.274) 0:04:12.146 ********** 2025-04-14 01:17:50.569376 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.569387 | 
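The "Copying over ..." tasks and the "RUNNING HANDLER [nova : Restart ... container]" entries above follow the standard Ansible template-plus-notify pattern: each configuration task loops over the service map (the {'key': ..., 'value': ...} items shown in the log output) and notifies a per-service restart handler, which only fires when the play reaches "Flush handlers". The sketch below is illustrative only; variable names, file names, and the handler body are assumptions, not the actual kolla-ansible role code.

    # Illustrative sketch only; names below are assumptions.
    - name: Copying over nova.conf
      become: true
      ansible.builtin.template:
        src: nova.conf.j2
        dest: "/etc/kolla/{{ item.key }}/nova.conf"
        mode: "0660"
      with_dict: "{{ nova_services }}"   # each loop item surfaces in the log as {'key': ..., 'value': ...}
      when: item.value.enabled | bool
      notify:
        - Restart {{ item.key }} container

    # Handler stub; the real role restarts containers through its own module,
    # community.docker.docker_container is used here purely as a stand-in.
    - name: Restart nova-api container
      become: true
      community.docker.docker_container:
        name: nova_api
        state: started
        restart: true

Because handlers are deduplicated and deferred, the nova_scheduler and nova_api restarts appear only once per node at the end of the play, after all configuration files have been written.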
orchestrator | 2025-04-14 01:17:50.569397 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-14 01:17:50.569407 | orchestrator | Monday 14 April 2025 01:13:23 +0000 (0:00:01.420) 0:04:13.566 ********** 2025-04-14 01:17:50.569418 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.569540 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.569554 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.569564 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.569574 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.569584 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.569594 | orchestrator | 2025-04-14 01:17:50.569604 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-04-14 01:17:50.569615 | orchestrator | Monday 14 April 2025 01:13:24 +0000 (0:00:00.730) 0:04:14.297 ********** 2025-04-14 01:17:50.569625 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.569635 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.569645 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.569655 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:17:50.569665 | orchestrator | 2025-04-14 01:17:50.569676 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-04-14 01:17:50.569694 | orchestrator | Monday 14 April 2025 01:13:25 +0000 (0:00:01.303) 0:04:15.600 ********** 2025-04-14 01:17:50.569705 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-04-14 01:17:50.569715 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-04-14 01:17:50.569725 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-04-14 01:17:50.569735 | orchestrator | 2025-04-14 01:17:50.569746 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-04-14 01:17:50.569756 | orchestrator | Monday 14 April 2025 01:13:26 +0000 (0:00:00.649) 0:04:16.250 ********** 2025-04-14 01:17:50.569766 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-04-14 01:17:50.569777 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-04-14 01:17:50.569787 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-04-14 01:17:50.569797 | orchestrator | 2025-04-14 01:17:50.569807 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-04-14 01:17:50.569818 | orchestrator | Monday 14 April 2025 01:13:27 +0000 (0:00:01.353) 0:04:17.604 ********** 2025-04-14 01:17:50.569830 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-04-14 01:17:50.569841 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.569852 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-04-14 01:17:50.569863 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.569875 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-04-14 01:17:50.569886 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.569903 | orchestrator | 2025-04-14 01:17:50.569915 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-04-14 01:17:50.569926 | orchestrator | Monday 14 April 2025 01:13:28 +0000 (0:00:00.848) 0:04:18.452 ********** 2025-04-14 01:17:50.569938 | orchestrator | skipping: [testbed-node-0] 
=> (item=net.bridge.bridge-nf-call-iptables)  2025-04-14 01:17:50.569949 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-14 01:17:50.569960 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.569972 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-14 01:17:50.569983 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-14 01:17:50.569994 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-14 01:17:50.570005 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-14 01:17:50.570044 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.570060 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-04-14 01:17:50.570072 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-04-14 01:17:50.570083 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.570095 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-14 01:17:50.570106 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-14 01:17:50.570118 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-04-14 01:17:50.570129 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-04-14 01:17:50.570140 | orchestrator | 2025-04-14 01:17:50.570223 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-04-14 01:17:50.570238 | orchestrator | Monday 14 April 2025 01:13:30 +0000 (0:00:01.913) 0:04:20.366 ********** 2025-04-14 01:17:50.570249 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.570259 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.570269 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.570279 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.570289 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.570300 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.570321 | orchestrator | 2025-04-14 01:17:50.570332 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-04-14 01:17:50.570342 | orchestrator | Monday 14 April 2025 01:13:31 +0000 (0:00:01.171) 0:04:21.537 ********** 2025-04-14 01:17:50.570352 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.570363 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.570373 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.570383 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.570393 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.570403 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.570413 | orchestrator | 2025-04-14 01:17:50.570423 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-04-14 01:17:50.570433 | orchestrator | Monday 14 April 2025 01:13:33 +0000 (0:00:01.865) 0:04:23.403 ********** 2025-04-14 01:17:50.570445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.570472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.570485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570571 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.570630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.570642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570752 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.570764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.570798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.570810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.570919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.570941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.570969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.570982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.571070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.571082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.571093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.571105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.571147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.571234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.571610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.571634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571677 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.571688 | orchestrator | 2025-04-14 01:17:50.571700 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-14 01:17:50.571711 | orchestrator | Monday 14 April 2025 01:13:35 +0000 (0:00:02.678) 0:04:26.082 ********** 2025-04-14 01:17:50.571724 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-04-14 01:17:50.571736 | orchestrator | 2025-04-14 01:17:50.571747 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-04-14 01:17:50.571757 | orchestrator | Monday 14 April 2025 01:13:37 +0000 (0:00:01.521) 0:04:27.603 ********** 2025-04-14 01:17:50.571828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': 
True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.571987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2025-04-14 01:17:50.572112 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.572137 | orchestrator | 2025-04-14 01:17:50.572146 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-04-14 01:17:50.572155 | orchestrator | Monday 14 April 2025 01:13:41 +0000 (0:00:03.933) 0:04:31.537 ********** 2025-04-14 01:17:50.572164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
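Note for readability: each loop item echoed by the service-cert-copy and nova-cell tasks above is one kolla-ansible container definition. Below is a minimal sketch of the nova-conductor item as YAML, built only from values printed in this log; the name of the enclosing services dictionary is an assumption and is not shown in the log itself.

    # Hedged sketch reconstructed from the log output above (not copied from the deployed config).
    nova-conductor:
      container_name: nova_conductor
      group: nova-conductor
      enabled: true
      image: registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206
      volumes:
        - /etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro   # generated config mounted read-only
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: 30
        retries: 3
        start_period: 5
        test: ['CMD-SHELL', 'healthcheck_port nova-conductor 5672']
        timeout: 30

The healthcheck test commands seen throughout this log (healthcheck_port, healthcheck_curl, healthcheck_listen, virsh version --daemon) are the commands the container runtime executes at the given interval to decide whether the container is healthy.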
2025-04-14 01:17:50.572225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572251 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.572270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.572295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572305 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.572315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.572398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572407 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.572416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572439 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.572449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572467 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.572530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572574 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.572583 | orchestrator | 2025-04-14 01:17:50.572592 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-04-14 01:17:50.572601 | orchestrator | Monday 14 April 2025 01:13:43 +0000 (0:00:02.027) 0:04:33.565 ********** 2025-04-14 01:17:50.572610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.572634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572643 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.572672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 
01:17:50.572703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.572718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572727 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.572736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.572745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572754 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.572783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572815 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.572824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572842 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.572851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.572860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.572879 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.572888 | orchestrator | 2025-04-14 01:17:50.572896 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-14 01:17:50.572905 | orchestrator | Monday 14 April 2025 01:13:46 +0000 (0:00:02.960) 0:04:36.525 ********** 2025-04-14 01:17:50.572914 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.572923 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.572932 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.572940 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-04-14 01:17:50.572949 | orchestrator | 2025-04-14 01:17:50.572957 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-04-14 01:17:50.572966 | orchestrator | Monday 14 April 2025 01:13:47 +0000 (0:00:01.238) 0:04:37.763 ********** 2025-04-14 01:17:50.572994 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 01:17:50.573004 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-14 01:17:50.573013 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-14 01:17:50.573021 | orchestrator | 2025-04-14 01:17:50.573030 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-04-14 01:17:50.573038 | orchestrator | Monday 14 April 2025 01:13:48 +0000 (0:00:00.829) 0:04:38.593 ********** 2025-04-14 01:17:50.573052 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 01:17:50.573061 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-04-14 01:17:50.573070 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-04-14 01:17:50.573078 | orchestrator | 2025-04-14 01:17:50.573087 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-04-14 01:17:50.573096 | orchestrator | Monday 14 April 2025 01:13:49 +0000 (0:00:00.825) 0:04:39.419 ********** 2025-04-14 01:17:50.573104 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:17:50.573113 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:17:50.573122 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:17:50.573131 | orchestrator | 2025-04-14 01:17:50.573139 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-04-14 01:17:50.573148 | orchestrator | Monday 14 April 2025 01:13:50 +0000 (0:00:00.905) 0:04:40.324 ********** 2025-04-14 01:17:50.573157 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:17:50.573165 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:17:50.573174 | orchestrator | ok: [testbed-node-5] 2025-04-14 01:17:50.573183 | orchestrator | 2025-04-14 01:17:50.573191 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-04-14 01:17:50.573200 | orchestrator | Monday 14 April 2025 01:13:50 +0000 (0:00:00.303) 0:04:40.628 ********** 2025-04-14 01:17:50.573209 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-14 01:17:50.573221 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-14 01:17:50.573230 | orchestrator | changed: [testbed-node-5] => 
(item=nova-compute) 2025-04-14 01:17:50.573239 | orchestrator | 2025-04-14 01:17:50.573247 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-04-14 01:17:50.573256 | orchestrator | Monday 14 April 2025 01:13:51 +0000 (0:00:01.356) 0:04:41.984 ********** 2025-04-14 01:17:50.573265 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-14 01:17:50.573273 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-14 01:17:50.573282 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-14 01:17:50.573291 | orchestrator | 2025-04-14 01:17:50.573299 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-04-14 01:17:50.573308 | orchestrator | Monday 14 April 2025 01:13:52 +0000 (0:00:01.227) 0:04:43.212 ********** 2025-04-14 01:17:50.573317 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-04-14 01:17:50.573325 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-04-14 01:17:50.573334 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-04-14 01:17:50.573343 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-04-14 01:17:50.573354 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-04-14 01:17:50.573363 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-04-14 01:17:50.573372 | orchestrator | 2025-04-14 01:17:50.573381 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-04-14 01:17:50.573389 | orchestrator | Monday 14 April 2025 01:13:58 +0000 (0:00:05.323) 0:04:48.535 ********** 2025-04-14 01:17:50.573398 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.573407 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.573415 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.573424 | orchestrator | 2025-04-14 01:17:50.573432 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-04-14 01:17:50.573441 | orchestrator | Monday 14 April 2025 01:13:58 +0000 (0:00:00.299) 0:04:48.834 ********** 2025-04-14 01:17:50.573450 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.573459 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.573467 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.573476 | orchestrator | 2025-04-14 01:17:50.573485 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-04-14 01:17:50.573493 | orchestrator | Monday 14 April 2025 01:13:59 +0000 (0:00:00.481) 0:04:49.316 ********** 2025-04-14 01:17:50.573524 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.573534 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.573543 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.573551 | orchestrator | 2025-04-14 01:17:50.573560 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-04-14 01:17:50.573569 | orchestrator | Monday 14 April 2025 01:14:00 +0000 (0:00:01.530) 0:04:50.846 ********** 2025-04-14 01:17:50.573577 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-14 01:17:50.573587 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': 
True}) 2025-04-14 01:17:50.573596 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-04-14 01:17:50.573605 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-14 01:17:50.573613 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-14 01:17:50.573622 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-04-14 01:17:50.573631 | orchestrator | 2025-04-14 01:17:50.573640 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-04-14 01:17:50.573669 | orchestrator | Monday 14 April 2025 01:14:04 +0000 (0:00:03.427) 0:04:54.273 ********** 2025-04-14 01:17:50.573680 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 01:17:50.573689 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 01:17:50.573698 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 01:17:50.573706 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-04-14 01:17:50.573715 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.573730 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-04-14 01:17:50.573739 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-04-14 01:17:50.573748 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.573757 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.573767 | orchestrator | 2025-04-14 01:17:50.573775 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-04-14 01:17:50.573784 | orchestrator | Monday 14 April 2025 01:14:07 +0000 (0:00:03.350) 0:04:57.624 ********** 2025-04-14 01:17:50.573792 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.573801 | orchestrator | 2025-04-14 01:17:50.573810 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-04-14 01:17:50.573819 | orchestrator | Monday 14 April 2025 01:14:07 +0000 (0:00:00.131) 0:04:57.755 ********** 2025-04-14 01:17:50.573827 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.573836 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.573845 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.573853 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.573861 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.573870 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.573878 | orchestrator | 2025-04-14 01:17:50.573887 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-04-14 01:17:50.573895 | orchestrator | Monday 14 April 2025 01:14:08 +0000 (0:00:00.953) 0:04:58.709 ********** 2025-04-14 01:17:50.573904 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-04-14 01:17:50.573913 | orchestrator | 2025-04-14 01:17:50.573921 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-04-14 01:17:50.573930 | orchestrator | Monday 14 April 2025 01:14:08 +0000 (0:00:00.380) 0:04:59.089 ********** 2025-04-14 01:17:50.573939 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.573952 | 
orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.573961 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.573969 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.573978 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.573986 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.573995 | orchestrator | 2025-04-14 01:17:50.574004 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-04-14 01:17:50.574012 | orchestrator | Monday 14 April 2025 01:14:09 +0000 (0:00:00.920) 0:05:00.010 ********** 2025-04-14 01:17:50.574054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.574065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.574098 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.574118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.574141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.574151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.574160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 
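The nova-libvirt items echoed above differ from the API-style containers: the log shows them running privileged in the host PID and cgroup namespaces, with /dev, /run and /sys/fs/cgroup mounted in and a raised memlock ulimit. A partial YAML sketch of just those distinguishing fields, again taken only from values printed in this log (the empty strings inside the printed volumes lists appear to be optional mounts that rendered empty and are omitted here):

    # Hedged, partial sketch of the nova-libvirt definition as echoed in the log.
    nova-libvirt:
      container_name: nova_libvirt
      image: registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206
      privileged: true
      pid_mode: host          # shares the host PID namespace
      cgroupns_mode: host     # shares the host cgroup namespace
      volumes:
        - /run:/run:shared
        - /dev:/dev
        - /sys/fs/cgroup:/sys/fs/cgroup
        - libvirtd:/var/lib/libvirt
        - nova_compute:/var/lib/nova/
        - nova_libvirt_qemu:/etc/libvirt/qemu
      dimensions:
        ulimits:
          memlock:            # 67108864 bytes = 64 MiB, matching the soft/hard values in the log
            soft: 67108864
            hard: 67108864
      healthcheck:
        test: ['CMD-SHELL', 'virsh version --daemon']
        interval: 30
        retries: 3
        start_period: 5
        timeout: 30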
2025-04-14 01:17:50.574191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 
'timeout': '30'}}})  2025-04-14 01:17:50.574343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})  2025-04-14 01:17:50.574708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.574798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574813 | orchestrator | 2025-04-14 01:17:50.574823 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-04-14 01:17:50.574831 | orchestrator | Monday 14 April 2025 01:14:13 +0000 (0:00:03.804) 0:05:03.814 ********** 2025-04-14 01:17:50.574840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.574849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.574858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.574928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.574937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.574947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.574964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.574973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.575027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.575037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.575046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.575055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.575064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.575116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.575126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.575135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.575144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.575164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.575195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 
'timeout': '30'}}})  2025-04-14 01:17:50.575233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.575246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575312 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.575335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.575363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.575383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-04-14 01:17:50.575399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.575498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.575541 | orchestrator | 2025-04-14 01:17:50.575550 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-04-14 01:17:50.575559 | orchestrator | Monday 14 April 2025 01:14:21 +0000 (0:00:07.825) 0:05:11.640 ********** 2025-04-14 01:17:50.575568 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.575577 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.575585 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.575594 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.575603 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.575611 | orchestrator | skipping: 
[testbed-node-2] 2025-04-14 01:17:50.575619 | orchestrator | 2025-04-14 01:17:50.575628 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-04-14 01:17:50.575637 | orchestrator | Monday 14 April 2025 01:14:23 +0000 (0:00:01.879) 0:05:13.519 ********** 2025-04-14 01:17:50.575645 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-14 01:17:50.575654 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-14 01:17:50.575666 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-04-14 01:17:50.575675 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-14 01:17:50.575684 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.575711 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-14 01:17:50.575722 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-14 01:17:50.575731 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.575759 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-04-14 01:17:50.575769 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.575778 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-14 01:17:50.575787 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-04-14 01:17:50.575796 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-14 01:17:50.575804 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-14 01:17:50.575813 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-04-14 01:17:50.575822 | orchestrator | 2025-04-14 01:17:50.575830 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-04-14 01:17:50.575839 | orchestrator | Monday 14 April 2025 01:14:28 +0000 (0:00:05.618) 0:05:19.137 ********** 2025-04-14 01:17:50.575847 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.575856 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.575865 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.575874 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.575882 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.575891 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.575900 | orchestrator | 2025-04-14 01:17:50.575908 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-04-14 01:17:50.575917 | orchestrator | Monday 14 April 2025 01:14:29 +0000 (0:00:00.938) 0:05:20.076 ********** 2025-04-14 01:17:50.575931 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-14 01:17:50.575940 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-14 01:17:50.575949 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-04-14 
01:17:50.575958 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.575967 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-14 01:17:50.575975 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.575984 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.575992 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.576001 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.576010 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-14 01:17:50.576018 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.576027 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.576036 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-04-14 01:17:50.576044 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.576053 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-04-14 01:17:50.576062 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576070 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576082 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576091 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576099 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576108 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-04-14 01:17:50.576117 | orchestrator | 2025-04-14 01:17:50.576126 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-04-14 01:17:50.576134 | orchestrator | Monday 14 April 2025 01:14:37 +0000 (0:00:07.805) 0:05:27.881 ********** 2025-04-14 01:17:50.576143 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 01:17:50.576152 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 01:17:50.576180 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-04-14 01:17:50.576190 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-14 01:17:50.576199 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-14 01:17:50.576207 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-04-14 01:17:50.576216 
| orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:17:50.576225 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 01:17:50.576238 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:17:50.576247 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-04-14 01:17:50.576256 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 01:17:50.576265 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-04-14 01:17:50.576273 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-14 01:17:50.576282 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.576290 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-14 01:17:50.576299 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.576308 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-04-14 01:17:50.576316 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.576325 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:17:50.576334 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:17:50.576342 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-04-14 01:17:50.576351 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-14 01:17:50.576359 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-14 01:17:50.576368 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-04-14 01:17:50.576376 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:17:50.576385 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:17:50.576394 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-04-14 01:17:50.576402 | orchestrator | 2025-04-14 01:17:50.576411 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-04-14 01:17:50.576420 | orchestrator | Monday 14 April 2025 01:14:48 +0000 (0:00:10.883) 0:05:38.765 ********** 2025-04-14 01:17:50.576428 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.576437 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.576446 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.576454 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.576463 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.576471 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.576480 | orchestrator | 2025-04-14 01:17:50.576488 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-04-14 01:17:50.576497 | orchestrator | Monday 14 April 2025 01:14:49 +0000 (0:00:00.776) 0:05:39.542 ********** 2025-04-14 01:17:50.576546 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.576556 | orchestrator | 
skipping: [testbed-node-4] 2025-04-14 01:17:50.576565 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.576574 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.576582 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.576591 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.576600 | orchestrator | 2025-04-14 01:17:50.576608 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-04-14 01:17:50.576617 | orchestrator | Monday 14 April 2025 01:14:50 +0000 (0:00:01.020) 0:05:40.563 ********** 2025-04-14 01:17:50.576626 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.576638 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.576647 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.576656 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.576664 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.576673 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.576686 | orchestrator | 2025-04-14 01:17:50.576698 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-04-14 01:17:50.576707 | orchestrator | Monday 14 April 2025 01:14:53 +0000 (0:00:02.780) 0:05:43.344 ********** 2025-04-14 01:17:50.576747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.576759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.576768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.576778 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.576787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.576796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576857 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.576866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.576875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.576884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.576893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.576906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.576934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.576969 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.576977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.576986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577064 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.577073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577156 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.577175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577257 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.577265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577353 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.577362 | orchestrator | 2025-04-14 01:17:50.577370 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-04-14 01:17:50.577378 | orchestrator | Monday 14 April 2025 01:14:55 +0000 (0:00:02.565) 0:05:45.909 ********** 2025-04-14 01:17:50.577386 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-14 01:17:50.577394 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577402 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.577410 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-14 01:17:50.577418 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577426 | 
orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.577434 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-14 01:17:50.577442 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577450 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.577458 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-14 01:17:50.577466 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577474 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.577482 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-14 01:17:50.577490 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577498 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.577521 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-14 01:17:50.577530 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-14 01:17:50.577539 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.577547 | orchestrator | 2025-04-14 01:17:50.577555 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-04-14 01:17:50.577563 | orchestrator | Monday 14 April 2025 01:14:56 +0000 (0:00:00.876) 0:05:46.785 ********** 2025-04-14 01:17:50.577575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-04-14 01:17:50.577641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-04-14 01:17:50.577659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-04-14 01:17:50.577898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-04-14 01:17:50.577912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.577979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.577995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.578067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.578076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-04-14 01:17:50.578103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-04-14 01:17:50.578128 | orchestrator | 2025-04-14 01:17:50.578136 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-04-14 01:17:50.578145 | orchestrator | Monday 14 April 2025 01:15:00 +0000 (0:00:03.865) 0:05:50.651 ********** 2025-04-14 01:17:50.578153 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.578161 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.578169 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.578177 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.578185 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.578193 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.578200 | orchestrator | 2025-04-14 01:17:50.578209 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-14 01:17:50.578217 | orchestrator | Monday 14 April 2025 01:15:01 +0000 (0:00:00.801) 0:05:51.452 ********** 2025-04-14 01:17:50.578225 | orchestrator | 2025-04-14 01:17:50.578233 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-04-14 01:17:50.578241 | orchestrator | Monday 14 April 2025 01:15:01 +0000 (0:00:00.314) 0:05:51.767 ********** 2025-04-14 01:17:50.578249 | orchestrator | 2025-04-14 01:17:50.578257 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-14 01:17:50.578265 | orchestrator | Monday 14 April 2025 01:15:01 +0000 (0:00:00.111) 0:05:51.878 ********** 2025-04-14 01:17:50.578273 | orchestrator | 2025-04-14 01:17:50.578281 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-14 01:17:50.578293 | orchestrator | Monday 14 April 2025 01:15:01 +0000 (0:00:00.317) 0:05:52.196 ********** 2025-04-14 01:17:50.578301 | orchestrator | 2025-04-14 01:17:50.578309 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-14 01:17:50.578316 | orchestrator | Monday 14 April 2025 01:15:02 +0000 (0:00:00.180) 0:05:52.376 ********** 2025-04-14 01:17:50.578324 | orchestrator | 2025-04-14 01:17:50.578332 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-04-14 01:17:50.578340 | orchestrator | Monday 14 April 2025 01:15:02 +0000 (0:00:00.307) 0:05:52.683 ********** 2025-04-14 01:17:50.578348 | orchestrator | 2025-04-14 01:17:50.578356 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-04-14 01:17:50.578364 | orchestrator | Monday 14 April 2025 01:15:02 +0000 (0:00:00.126) 0:05:52.809 ********** 2025-04-14 01:17:50.578372 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.578380 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.578388 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.578396 | orchestrator | 2025-04-14 01:17:50.578404 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-04-14 01:17:50.578412 | orchestrator | Monday 14 April 2025 01:15:15 +0000 (0:00:12.992) 0:06:05.802 ********** 2025-04-14 01:17:50.578420 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.578428 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.578436 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.578443 | orchestrator | 2025-04-14 01:17:50.578451 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-04-14 01:17:50.578460 | orchestrator | Monday 14 April 2025 01:15:31 +0000 (0:00:15.974) 0:06:21.777 ********** 2025-04-14 01:17:50.578471 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.578479 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.578487 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.578495 | orchestrator | 2025-04-14 01:17:50.578503 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-04-14 01:17:50.578548 | orchestrator | Monday 14 April 2025 01:15:53 +0000 (0:00:21.719) 0:06:43.496 ********** 2025-04-14 01:17:50.578556 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.578564 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.578572 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.578580 | orchestrator | 2025-04-14 01:17:50.578589 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-04-14 01:17:50.578597 | orchestrator | Monday 14 April 2025 01:16:18 +0000 (0:00:25.179) 
0:07:08.676 ********** 2025-04-14 01:17:50.578605 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.578613 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.578621 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.578629 | orchestrator | 2025-04-14 01:17:50.578637 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-04-14 01:17:50.578645 | orchestrator | Monday 14 April 2025 01:16:19 +0000 (0:00:00.890) 0:07:09.566 ********** 2025-04-14 01:17:50.578653 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.578661 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.578670 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.578678 | orchestrator | 2025-04-14 01:17:50.578689 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-04-14 01:17:50.578697 | orchestrator | Monday 14 April 2025 01:16:20 +0000 (0:00:00.765) 0:07:10.331 ********** 2025-04-14 01:17:50.578705 | orchestrator | changed: [testbed-node-3] 2025-04-14 01:17:50.578713 | orchestrator | changed: [testbed-node-5] 2025-04-14 01:17:50.578721 | orchestrator | changed: [testbed-node-4] 2025-04-14 01:17:50.578729 | orchestrator | 2025-04-14 01:17:50.578737 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-04-14 01:17:50.578745 | orchestrator | Monday 14 April 2025 01:16:41 +0000 (0:00:21.300) 0:07:31.632 ********** 2025-04-14 01:17:50.578753 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.578760 | orchestrator | 2025-04-14 01:17:50.578770 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-04-14 01:17:50.578778 | orchestrator | Monday 14 April 2025 01:16:41 +0000 (0:00:00.144) 0:07:31.776 ********** 2025-04-14 01:17:50.578785 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.578791 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.578799 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.578806 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.578812 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.578820 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-04-14 01:17:50.578827 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 01:17:50.578834 | orchestrator | 2025-04-14 01:17:50.578841 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-04-14 01:17:50.578847 | orchestrator | Monday 14 April 2025 01:17:04 +0000 (0:00:22.487) 0:07:54.264 ********** 2025-04-14 01:17:50.578854 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.578864 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.578871 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.578878 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.578885 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.578892 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.578899 | orchestrator | 2025-04-14 01:17:50.578906 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-04-14 01:17:50.578913 | orchestrator | Monday 14 April 2025 01:17:14 +0000 (0:00:10.446) 0:08:04.710 ********** 2025-04-14 01:17:50.578920 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.578927 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.578934 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.578940 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.578947 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.578954 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-04-14 01:17:50.578962 | orchestrator | 2025-04-14 01:17:50.578969 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-04-14 01:17:50.578976 | orchestrator | Monday 14 April 2025 01:17:17 +0000 (0:00:03.189) 0:08:07.900 ********** 2025-04-14 01:17:50.578983 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 01:17:50.578990 | orchestrator | 2025-04-14 01:17:50.578996 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-04-14 01:17:50.579003 | orchestrator | Monday 14 April 2025 01:17:28 +0000 (0:00:10.500) 0:08:18.401 ********** 2025-04-14 01:17:50.579010 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 01:17:50.579017 | orchestrator | 2025-04-14 01:17:50.579024 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-04-14 01:17:50.579031 | orchestrator | Monday 14 April 2025 01:17:29 +0000 (0:00:01.226) 0:08:19.627 ********** 2025-04-14 01:17:50.579038 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.579045 | orchestrator | 2025-04-14 01:17:50.579052 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-04-14 01:17:50.579059 | orchestrator | Monday 14 April 2025 01:17:30 +0000 (0:00:01.502) 0:08:21.130 ********** 2025-04-14 01:17:50.579066 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-04-14 01:17:50.579073 | orchestrator | 2025-04-14 01:17:50.579080 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-04-14 01:17:50.579087 | orchestrator | Monday 14 April 2025 01:17:40 +0000 (0:00:09.474) 0:08:30.605 ********** 2025-04-14 01:17:50.579094 | orchestrator | ok: [testbed-node-3] 2025-04-14 01:17:50.579101 | orchestrator | ok: [testbed-node-4] 2025-04-14 01:17:50.579108 | orchestrator | ok: 
[testbed-node-5] 2025-04-14 01:17:50.579115 | orchestrator | ok: [testbed-node-0] 2025-04-14 01:17:50.579122 | orchestrator | ok: [testbed-node-1] 2025-04-14 01:17:50.579132 | orchestrator | ok: [testbed-node-2] 2025-04-14 01:17:50.579140 | orchestrator | 2025-04-14 01:17:50.579150 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-04-14 01:17:50.579157 | orchestrator | 2025-04-14 01:17:50.579164 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-04-14 01:17:50.579171 | orchestrator | Monday 14 April 2025 01:17:42 +0000 (0:00:02.150) 0:08:32.755 ********** 2025-04-14 01:17:50.579178 | orchestrator | changed: [testbed-node-0] 2025-04-14 01:17:50.579185 | orchestrator | changed: [testbed-node-1] 2025-04-14 01:17:50.579192 | orchestrator | changed: [testbed-node-2] 2025-04-14 01:17:50.579199 | orchestrator | 2025-04-14 01:17:50.579206 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-04-14 01:17:50.579213 | orchestrator | 2025-04-14 01:17:50.579220 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-04-14 01:17:50.579227 | orchestrator | Monday 14 April 2025 01:17:43 +0000 (0:00:01.174) 0:08:33.930 ********** 2025-04-14 01:17:50.579234 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.579241 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.579248 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.579255 | orchestrator | 2025-04-14 01:17:50.579262 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-04-14 01:17:50.579269 | orchestrator | 2025-04-14 01:17:50.579276 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-04-14 01:17:50.579283 | orchestrator | Monday 14 April 2025 01:17:44 +0000 (0:00:00.933) 0:08:34.863 ********** 2025-04-14 01:17:50.579290 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-04-14 01:17:50.579297 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-04-14 01:17:50.579304 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579311 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-04-14 01:17:50.579318 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-04-14 01:17:50.579325 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579332 | orchestrator | skipping: [testbed-node-3] 2025-04-14 01:17:50.579339 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-04-14 01:17:50.579346 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-04-14 01:17:50.579353 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579360 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-04-14 01:17:50.579367 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-04-14 01:17:50.579374 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579380 | orchestrator | skipping: [testbed-node-4] 2025-04-14 01:17:50.579388 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-04-14 01:17:50.579395 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-04-14 01:17:50.579401 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579411 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-04-14 01:17:50.579418 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-04-14 01:17:50.579425 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579432 | orchestrator | skipping: [testbed-node-5] 2025-04-14 01:17:50.579439 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-04-14 01:17:50.579446 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-04-14 01:17:50.579453 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579460 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-04-14 01:17:50.579467 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-04-14 01:17:50.579474 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579487 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-04-14 01:17:50.579494 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-04-14 01:17:50.579501 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579519 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-04-14 01:17:50.579526 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-04-14 01:17:50.579533 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579540 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.579547 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.579554 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-04-14 01:17:50.579561 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-04-14 01:17:50.579568 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-04-14 01:17:50.579575 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-04-14 01:17:50.579582 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-04-14 01:17:50.579589 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-04-14 01:17:50.579596 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:50.579603 | orchestrator | 2025-04-14 01:17:50.579610 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-04-14 01:17:50.579617 | orchestrator | 2025-04-14 01:17:50.579625 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-04-14 01:17:50.579635 | orchestrator | Monday 14 April 2025 01:17:46 +0000 (0:00:01.520) 0:08:36.384 ********** 2025-04-14 01:17:50.579642 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-04-14 01:17:50.579649 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-04-14 01:17:50.579656 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:50.579663 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-04-14 01:17:50.579670 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-04-14 01:17:50.579677 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:50.579687 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-04-14 01:17:53.611471 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-04-14 01:17:53.611652 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:53.611675 | orchestrator | 2025-04-14 01:17:53.611691 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-04-14 01:17:53.611706 | orchestrator | 2025-04-14 01:17:53.611720 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-04-14 01:17:53.611735 | orchestrator | Monday 14 April 2025 01:17:46 +0000 (0:00:00.659) 0:08:37.043 ********** 2025-04-14 01:17:53.611749 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:53.611763 | orchestrator | 2025-04-14 01:17:53.611777 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-04-14 01:17:53.611791 | orchestrator | 2025-04-14 01:17:53.611805 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-04-14 01:17:53.611819 | orchestrator | Monday 14 April 2025 01:17:47 +0000 (0:00:00.973) 0:08:38.016 ********** 2025-04-14 01:17:53.611833 | orchestrator | skipping: [testbed-node-0] 2025-04-14 01:17:53.611847 | orchestrator | skipping: [testbed-node-1] 2025-04-14 01:17:53.611861 | orchestrator | skipping: [testbed-node-2] 2025-04-14 01:17:53.611875 | orchestrator | 2025-04-14 01:17:53.611889 | orchestrator | PLAY RECAP ********************************************************************* 2025-04-14 01:17:53.611902 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-04-14 01:17:53.611919 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-04-14 01:17:53.611933 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-14 01:17:53.611974 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-04-14 01:17:53.611990 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-04-14 01:17:53.612004 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-04-14 01:17:53.612021 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-04-14 01:17:53.612036 | orchestrator | 2025-04-14 01:17:53.612051 | orchestrator | 2025-04-14 01:17:53.612067 | orchestrator | TASKS RECAP ******************************************************************** 2025-04-14 01:17:53.612083 | orchestrator | Monday 14 April 2025 01:17:48 +0000 (0:00:00.565) 0:08:38.582 ********** 2025-04-14 01:17:53.612099 | orchestrator | =============================================================================== 2025-04-14 01:17:53.612115 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.72s 2025-04-14 01:17:53.612131 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 25.18s 2025-04-14 01:17:53.612147 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.49s 2025-04-14 01:17:53.612162 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.72s 2025-04-14 01:17:53.612177 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.30s 2025-04-14 01:17:53.612192 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 20.88s 2025-04-14 01:17:53.612207 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.50s 2025-04-14 01:17:53.612223 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.32s 2025-04-14 01:17:53.612237 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.97s 2025-04-14 01:17:53.612252 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.69s 2025-04-14 01:17:53.612268 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.99s 2025-04-14 01:17:53.612283 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.33s 2025-04-14 01:17:53.612298 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.28s 2025-04-14 01:17:53.612313 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.88s 2025-04-14 01:17:53.612329 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.50s 2025-04-14 01:17:53.612344 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.45s 2025-04-14 01:17:53.612360 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.27s 2025-04-14 01:17:53.612373 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.25s 2025-04-14 01:17:53.612387 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.47s 2025-04-14 01:17:53.612401 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.91s 2025-04-14 01:17:53.612415 | orchestrator | 2025-04-14 01:17:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:17:53.612446 | orchestrator | 2025-04-14 01:17:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:17:56.660949 | orchestrator | 2025-04-14 01:17:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:17:56.661100 | orchestrator | 2025-04-14 01:17:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:17:59.705196 | orchestrator | 2025-04-14 01:17:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:17:59.705391 | orchestrator | 2025-04-14 01:17:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:18:02.752449 | orchestrator | 2025-04-14 01:17:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:18:02.752630 | orchestrator | 2025-04-14 01:18:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:18:05.808612 | orchestrator | 2025-04-14 01:18:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:18:05.808753 | orchestrator | 2025-04-14 01:18:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:18:08.858177 | orchestrator | 2025-04-14 01:18:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:18:08.858322 | orchestrator | 2025-04-14 01:18:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:18:11.902609 | orchestrator | 2025-04-14 01:18:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:18:11.902750 | orchestrator | 2025-04-14 01:18:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 
01:18:14.960065 | orchestrator | 2025-04-14 01:18:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:18:14.960209 | orchestrator | 2025-04-14 01:18:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:21:51.456649 | orchestrator | 2025-04-14 01:21:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:21:51.456811 | orchestrator |
2025-04-14 01:21:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:21:54.507048 | orchestrator | 2025-04-14 01:21:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:21:54.507219 | orchestrator | 2025-04-14 01:21:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:21:57.559217 | orchestrator | 2025-04-14 01:21:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:21:57.559358 | orchestrator | 2025-04-14 01:21:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:00.613070 | orchestrator | 2025-04-14 01:21:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:00.613214 | orchestrator | 2025-04-14 01:22:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:03.664975 | orchestrator | 2025-04-14 01:22:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:03.665117 | orchestrator | 2025-04-14 01:22:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:06.711409 | orchestrator | 2025-04-14 01:22:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:06.711545 | orchestrator | 2025-04-14 01:22:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:09.759470 | orchestrator | 2025-04-14 01:22:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:09.759601 | orchestrator | 2025-04-14 01:22:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:12.809657 | orchestrator | 2025-04-14 01:22:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:12.809787 | orchestrator | 2025-04-14 01:22:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:15.853478 | orchestrator | 2025-04-14 01:22:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:15.853623 | orchestrator | 2025-04-14 01:22:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:18.906380 | orchestrator | 2025-04-14 01:22:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:18.906524 | orchestrator | 2025-04-14 01:22:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:21.951803 | orchestrator | 2025-04-14 01:22:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:21.951990 | orchestrator | 2025-04-14 01:22:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:25.008559 | orchestrator | 2025-04-14 01:22:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:25.008687 | orchestrator | 2025-04-14 01:22:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:28.056883 | orchestrator | 2025-04-14 01:22:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:28.057111 | orchestrator | 2025-04-14 01:22:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:31.105297 | orchestrator | 2025-04-14 01:22:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:31.105424 | orchestrator | 2025-04-14 01:22:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:34.159687 | orchestrator | 2025-04-14 01:22:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:34.159817 | orchestrator | 2025-04-14 01:22:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:22:37.207035 | orchestrator | 2025-04-14 01:22:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:37.207176 | orchestrator | 2025-04-14 01:22:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:40.265638 | orchestrator | 2025-04-14 01:22:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:40.265776 | orchestrator | 2025-04-14 01:22:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:43.316221 | orchestrator | 2025-04-14 01:22:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:43.316365 | orchestrator | 2025-04-14 01:22:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:46.375772 | orchestrator | 2025-04-14 01:22:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:46.375896 | orchestrator | 2025-04-14 01:22:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:49.425412 | orchestrator | 2025-04-14 01:22:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:49.425572 | orchestrator | 2025-04-14 01:22:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:52.473497 | orchestrator | 2025-04-14 01:22:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:52.473644 | orchestrator | 2025-04-14 01:22:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:55.524253 | orchestrator | 2025-04-14 01:22:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:55.524408 | orchestrator | 2025-04-14 01:22:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:22:58.572445 | orchestrator | 2025-04-14 01:22:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:22:58.572581 | orchestrator | 2025-04-14 01:22:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:01.613273 | orchestrator | 2025-04-14 01:22:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:01.613421 | orchestrator | 2025-04-14 01:23:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:04.660416 | orchestrator | 2025-04-14 01:23:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:04.660556 | orchestrator | 2025-04-14 01:23:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:07.705029 | orchestrator | 2025-04-14 01:23:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:07.705155 | orchestrator | 2025-04-14 01:23:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:10.761472 | orchestrator | 2025-04-14 01:23:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:10.761585 | orchestrator | 2025-04-14 01:23:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:13.811325 | orchestrator | 2025-04-14 01:23:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:13.811472 | orchestrator | 2025-04-14 01:23:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:16.861512 | orchestrator | 2025-04-14 01:23:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:16.861637 | orchestrator | 2025-04-14 01:23:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:19.914868 | orchestrator | 2025-04-14 01:23:16 | 
INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:19.915038 | orchestrator | 2025-04-14 01:23:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:22.960289 | orchestrator | 2025-04-14 01:23:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:22.960421 | orchestrator | 2025-04-14 01:23:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:26.014419 | orchestrator | 2025-04-14 01:23:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:26.014573 | orchestrator | 2025-04-14 01:23:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:29.059590 | orchestrator | 2025-04-14 01:23:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:29.059730 | orchestrator | 2025-04-14 01:23:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:32.109997 | orchestrator | 2025-04-14 01:23:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:32.110244 | orchestrator | 2025-04-14 01:23:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:35.158305 | orchestrator | 2025-04-14 01:23:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:35.158550 | orchestrator | 2025-04-14 01:23:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:38.206285 | orchestrator | 2025-04-14 01:23:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:38.206384 | orchestrator | 2025-04-14 01:23:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:41.246100 | orchestrator | 2025-04-14 01:23:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:41.246243 | orchestrator | 2025-04-14 01:23:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:44.291408 | orchestrator | 2025-04-14 01:23:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:44.291560 | orchestrator | 2025-04-14 01:23:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:47.336678 | orchestrator | 2025-04-14 01:23:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:47.336818 | orchestrator | 2025-04-14 01:23:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:50.384810 | orchestrator | 2025-04-14 01:23:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:50.384945 | orchestrator | 2025-04-14 01:23:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:53.435010 | orchestrator | 2025-04-14 01:23:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:53.435204 | orchestrator | 2025-04-14 01:23:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:56.486599 | orchestrator | 2025-04-14 01:23:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:56.486752 | orchestrator | 2025-04-14 01:23:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:23:59.539603 | orchestrator | 2025-04-14 01:23:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:23:59.539730 | orchestrator | 2025-04-14 01:23:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:02.590489 | orchestrator | 2025-04-14 01:23:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:02.590625 | 
orchestrator | 2025-04-14 01:24:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:05.642581 | orchestrator | 2025-04-14 01:24:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:05.642752 | orchestrator | 2025-04-14 01:24:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:08.690267 | orchestrator | 2025-04-14 01:24:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:08.690448 | orchestrator | 2025-04-14 01:24:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:11.737564 | orchestrator | 2025-04-14 01:24:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:11.737770 | orchestrator | 2025-04-14 01:24:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:14.780132 | orchestrator | 2025-04-14 01:24:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:14.780313 | orchestrator | 2025-04-14 01:24:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:17.828264 | orchestrator | 2025-04-14 01:24:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:17.828404 | orchestrator | 2025-04-14 01:24:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:20.883586 | orchestrator | 2025-04-14 01:24:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:20.883734 | orchestrator | 2025-04-14 01:24:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:23.937272 | orchestrator | 2025-04-14 01:24:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:23.937412 | orchestrator | 2025-04-14 01:24:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:26.993562 | orchestrator | 2025-04-14 01:24:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:26.993711 | orchestrator | 2025-04-14 01:24:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:30.047980 | orchestrator | 2025-04-14 01:24:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:30.048104 | orchestrator | 2025-04-14 01:24:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:33.101348 | orchestrator | 2025-04-14 01:24:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:33.101489 | orchestrator | 2025-04-14 01:24:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:36.146822 | orchestrator | 2025-04-14 01:24:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:36.146967 | orchestrator | 2025-04-14 01:24:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:39.201508 | orchestrator | 2025-04-14 01:24:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:39.201676 | orchestrator | 2025-04-14 01:24:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:42.252580 | orchestrator | 2025-04-14 01:24:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:42.252709 | orchestrator | 2025-04-14 01:24:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:45.308409 | orchestrator | 2025-04-14 01:24:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:45.308556 | orchestrator | 2025-04-14 01:24:45 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:48.364368 | orchestrator | 2025-04-14 01:24:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:48.364503 | orchestrator | 2025-04-14 01:24:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:51.409584 | orchestrator | 2025-04-14 01:24:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:51.409725 | orchestrator | 2025-04-14 01:24:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:54.455516 | orchestrator | 2025-04-14 01:24:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:54.455657 | orchestrator | 2025-04-14 01:24:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:54.456629 | orchestrator | 2025-04-14 01:24:54 | INFO  | Task 7907f9a4-0011-4dc4-8939-aa208668212d is in state STARTED 2025-04-14 01:24:57.508066 | orchestrator | 2025-04-14 01:24:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:24:57.508206 | orchestrator | 2025-04-14 01:24:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:24:57.510514 | orchestrator | 2025-04-14 01:24:57 | INFO  | Task 7907f9a4-0011-4dc4-8939-aa208668212d is in state STARTED 2025-04-14 01:25:00.564200 | orchestrator | 2025-04-14 01:24:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:00.564405 | orchestrator | 2025-04-14 01:25:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:00.564829 | orchestrator | 2025-04-14 01:25:00 | INFO  | Task 7907f9a4-0011-4dc4-8939-aa208668212d is in state STARTED 2025-04-14 01:25:03.621832 | orchestrator | 2025-04-14 01:25:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:03.621959 | orchestrator | 2025-04-14 01:25:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:03.623071 | orchestrator | 2025-04-14 01:25:03 | INFO  | Task 7907f9a4-0011-4dc4-8939-aa208668212d is in state STARTED 2025-04-14 01:25:06.677817 | orchestrator | 2025-04-14 01:25:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:06.677957 | orchestrator | 2025-04-14 01:25:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:06.678648 | orchestrator | 2025-04-14 01:25:06 | INFO  | Task 7907f9a4-0011-4dc4-8939-aa208668212d is in state SUCCESS 2025-04-14 01:25:09.731686 | orchestrator | 2025-04-14 01:25:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:09.731839 | orchestrator | 2025-04-14 01:25:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:12.777462 | orchestrator | 2025-04-14 01:25:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:12.777642 | orchestrator | 2025-04-14 01:25:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:15.829452 | orchestrator | 2025-04-14 01:25:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:15.829545 | orchestrator | 2025-04-14 01:25:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:18.880395 | orchestrator | 2025-04-14 01:25:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:18.880536 | orchestrator | 2025-04-14 01:25:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:21.934278 | orchestrator | 2025-04-14 01:25:18 | INFO  | Wait 1 
second(s) until the next check 2025-04-14 01:25:21.934475 | orchestrator | 2025-04-14 01:25:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:24.983743 | orchestrator | 2025-04-14 01:25:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:24.983877 | orchestrator | 2025-04-14 01:25:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:28.040609 | orchestrator | 2025-04-14 01:25:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:28.040759 | orchestrator | 2025-04-14 01:25:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:31.096218 | orchestrator | 2025-04-14 01:25:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:31.096405 | orchestrator | 2025-04-14 01:25:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:34.146882 | orchestrator | 2025-04-14 01:25:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:34.147053 | orchestrator | 2025-04-14 01:25:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:37.194114 | orchestrator | 2025-04-14 01:25:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:37.194244 | orchestrator | 2025-04-14 01:25:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:40.249592 | orchestrator | 2025-04-14 01:25:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:40.249738 | orchestrator | 2025-04-14 01:25:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:43.298199 | orchestrator | 2025-04-14 01:25:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:43.298385 | orchestrator | 2025-04-14 01:25:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:46.345116 | orchestrator | 2025-04-14 01:25:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:46.345260 | orchestrator | 2025-04-14 01:25:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:49.397395 | orchestrator | 2025-04-14 01:25:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:49.397537 | orchestrator | 2025-04-14 01:25:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:52.453246 | orchestrator | 2025-04-14 01:25:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:52.453418 | orchestrator | 2025-04-14 01:25:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:55.501653 | orchestrator | 2025-04-14 01:25:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:55.501789 | orchestrator | 2025-04-14 01:25:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:25:58.555848 | orchestrator | 2025-04-14 01:25:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:25:58.555994 | orchestrator | 2025-04-14 01:25:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:01.599277 | orchestrator | 2025-04-14 01:25:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:01.599427 | orchestrator | 2025-04-14 01:26:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:04.649629 | orchestrator | 2025-04-14 01:26:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:04.649778 | orchestrator | 
2025-04-14 01:26:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:07.708867 | orchestrator | 2025-04-14 01:26:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:07.709016 | orchestrator | 2025-04-14 01:26:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:10.755438 | orchestrator | 2025-04-14 01:26:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:10.755576 | orchestrator | 2025-04-14 01:26:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:13.804833 | orchestrator | 2025-04-14 01:26:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:13.804970 | orchestrator | 2025-04-14 01:26:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:16.862541 | orchestrator | 2025-04-14 01:26:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:16.862685 | orchestrator | 2025-04-14 01:26:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:19.908526 | orchestrator | 2025-04-14 01:26:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:19.908668 | orchestrator | 2025-04-14 01:26:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:22.959944 | orchestrator | 2025-04-14 01:26:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:22.960083 | orchestrator | 2025-04-14 01:26:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:26.018947 | orchestrator | 2025-04-14 01:26:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:26.019078 | orchestrator | 2025-04-14 01:26:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:29.059030 | orchestrator | 2025-04-14 01:26:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:29.059194 | orchestrator | 2025-04-14 01:26:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:32.112215 | orchestrator | 2025-04-14 01:26:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:32.112356 | orchestrator | 2025-04-14 01:26:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:35.168793 | orchestrator | 2025-04-14 01:26:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:35.168940 | orchestrator | 2025-04-14 01:26:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:38.223198 | orchestrator | 2025-04-14 01:26:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:38.223342 | orchestrator | 2025-04-14 01:26:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:41.264361 | orchestrator | 2025-04-14 01:26:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:41.264588 | orchestrator | 2025-04-14 01:26:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:44.316243 | orchestrator | 2025-04-14 01:26:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:44.316382 | orchestrator | 2025-04-14 01:26:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:44.316919 | orchestrator | 2025-04-14 01:26:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:47.364061 | orchestrator | 2025-04-14 01:26:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:26:50.415289 | orchestrator | 2025-04-14 01:26:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:50.415492 | orchestrator | 2025-04-14 01:26:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:53.468266 | orchestrator | 2025-04-14 01:26:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:53.468408 | orchestrator | 2025-04-14 01:26:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:56.517696 | orchestrator | 2025-04-14 01:26:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:56.517834 | orchestrator | 2025-04-14 01:26:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:26:59.571078 | orchestrator | 2025-04-14 01:26:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:26:59.571206 | orchestrator | 2025-04-14 01:26:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:02.624712 | orchestrator | 2025-04-14 01:26:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:02.624892 | orchestrator | 2025-04-14 01:27:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:05.672383 | orchestrator | 2025-04-14 01:27:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:05.672570 | orchestrator | 2025-04-14 01:27:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:08.719947 | orchestrator | 2025-04-14 01:27:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:08.720110 | orchestrator | 2025-04-14 01:27:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:11.778509 | orchestrator | 2025-04-14 01:27:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:11.778678 | orchestrator | 2025-04-14 01:27:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:14.829583 | orchestrator | 2025-04-14 01:27:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:14.829677 | orchestrator | 2025-04-14 01:27:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:17.872875 | orchestrator | 2025-04-14 01:27:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:17.873005 | orchestrator | 2025-04-14 01:27:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:20.929779 | orchestrator | 2025-04-14 01:27:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:20.929927 | orchestrator | 2025-04-14 01:27:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:23.982451 | orchestrator | 2025-04-14 01:27:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:23.982619 | orchestrator | 2025-04-14 01:27:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:27.037310 | orchestrator | 2025-04-14 01:27:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:27.037394 | orchestrator | 2025-04-14 01:27:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:30.088124 | orchestrator | 2025-04-14 01:27:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:30.088273 | orchestrator | 2025-04-14 01:27:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:33.141754 | orchestrator | 2025-04-14 01:27:30 | 
INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:33.141897 | orchestrator | 2025-04-14 01:27:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:36.190966 | orchestrator | 2025-04-14 01:27:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:36.191109 | orchestrator | 2025-04-14 01:27:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:39.236288 | orchestrator | 2025-04-14 01:27:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:39.236452 | orchestrator | 2025-04-14 01:27:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:42.286978 | orchestrator | 2025-04-14 01:27:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:42.287176 | orchestrator | 2025-04-14 01:27:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:45.329794 | orchestrator | 2025-04-14 01:27:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:45.329925 | orchestrator | 2025-04-14 01:27:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:48.376326 | orchestrator | 2025-04-14 01:27:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:48.376563 | orchestrator | 2025-04-14 01:27:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:51.419643 | orchestrator | 2025-04-14 01:27:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:51.419738 | orchestrator | 2025-04-14 01:27:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:54.472565 | orchestrator | 2025-04-14 01:27:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:54.472708 | orchestrator | 2025-04-14 01:27:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:27:57.531038 | orchestrator | 2025-04-14 01:27:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:27:57.531143 | orchestrator | 2025-04-14 01:27:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:00.574790 | orchestrator | 2025-04-14 01:27:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:00.574937 | orchestrator | 2025-04-14 01:28:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:03.618704 | orchestrator | 2025-04-14 01:28:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:03.618843 | orchestrator | 2025-04-14 01:28:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:06.673050 | orchestrator | 2025-04-14 01:28:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:06.673193 | orchestrator | 2025-04-14 01:28:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:09.726379 | orchestrator | 2025-04-14 01:28:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:09.726624 | orchestrator | 2025-04-14 01:28:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:12.778939 | orchestrator | 2025-04-14 01:28:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:12.779086 | orchestrator | 2025-04-14 01:28:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:15.823665 | orchestrator | 2025-04-14 01:28:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:15.823811 | 
orchestrator | 2025-04-14 01:28:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:18.871379 | orchestrator | 2025-04-14 01:28:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:18.871519 | orchestrator | 2025-04-14 01:28:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:21.914336 | orchestrator | 2025-04-14 01:28:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:21.914481 | orchestrator | 2025-04-14 01:28:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:24.966440 | orchestrator | 2025-04-14 01:28:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:24.966623 | orchestrator | 2025-04-14 01:28:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:28.019688 | orchestrator | 2025-04-14 01:28:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:28.019825 | orchestrator | 2025-04-14 01:28:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:31.063474 | orchestrator | 2025-04-14 01:28:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:31.063647 | orchestrator | 2025-04-14 01:28:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:34.123411 | orchestrator | 2025-04-14 01:28:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:34.123534 | orchestrator | 2025-04-14 01:28:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:37.182768 | orchestrator | 2025-04-14 01:28:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:37.182914 | orchestrator | 2025-04-14 01:28:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:40.227039 | orchestrator | 2025-04-14 01:28:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:40.227176 | orchestrator | 2025-04-14 01:28:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:43.272296 | orchestrator | 2025-04-14 01:28:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:43.272443 | orchestrator | 2025-04-14 01:28:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:46.322079 | orchestrator | 2025-04-14 01:28:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:46.322222 | orchestrator | 2025-04-14 01:28:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:49.364253 | orchestrator | 2025-04-14 01:28:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:49.364391 | orchestrator | 2025-04-14 01:28:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:52.420882 | orchestrator | 2025-04-14 01:28:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:52.421027 | orchestrator | 2025-04-14 01:28:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:55.475664 | orchestrator | 2025-04-14 01:28:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:55.475811 | orchestrator | 2025-04-14 01:28:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:28:58.524274 | orchestrator | 2025-04-14 01:28:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:28:58.524412 | orchestrator | 2025-04-14 01:28:58 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:01.572687 | orchestrator | 2025-04-14 01:28:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:01.572826 | orchestrator | 2025-04-14 01:29:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:04.621306 | orchestrator | 2025-04-14 01:29:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:04.621439 | orchestrator | 2025-04-14 01:29:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:07.671301 | orchestrator | 2025-04-14 01:29:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:07.671476 | orchestrator | 2025-04-14 01:29:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:10.727738 | orchestrator | 2025-04-14 01:29:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:10.727876 | orchestrator | 2025-04-14 01:29:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:13.782265 | orchestrator | 2025-04-14 01:29:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:13.782414 | orchestrator | 2025-04-14 01:29:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:16.834359 | orchestrator | 2025-04-14 01:29:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:16.834470 | orchestrator | 2025-04-14 01:29:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:19.879637 | orchestrator | 2025-04-14 01:29:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:19.879791 | orchestrator | 2025-04-14 01:29:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:22.937805 | orchestrator | 2025-04-14 01:29:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:22.937950 | orchestrator | 2025-04-14 01:29:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:25.988740 | orchestrator | 2025-04-14 01:29:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:25.988881 | orchestrator | 2025-04-14 01:29:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:29.035774 | orchestrator | 2025-04-14 01:29:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:29.035914 | orchestrator | 2025-04-14 01:29:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:32.084030 | orchestrator | 2025-04-14 01:29:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:32.084178 | orchestrator | 2025-04-14 01:29:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:35.133491 | orchestrator | 2025-04-14 01:29:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:35.133701 | orchestrator | 2025-04-14 01:29:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:38.179814 | orchestrator | 2025-04-14 01:29:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:38.179962 | orchestrator | 2025-04-14 01:29:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:41.238127 | orchestrator | 2025-04-14 01:29:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:41.238309 | orchestrator | 2025-04-14 01:29:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 
01:29:44.288986 | orchestrator | 2025-04-14 01:29:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:44.289149 | orchestrator | 2025-04-14 01:29:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:47.342303 | orchestrator | 2025-04-14 01:29:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:47.342439 | orchestrator | 2025-04-14 01:29:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:50.392859 | orchestrator | 2025-04-14 01:29:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:50.393025 | orchestrator | 2025-04-14 01:29:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:53.442523 | orchestrator | 2025-04-14 01:29:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:53.442733 | orchestrator | 2025-04-14 01:29:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:56.496912 | orchestrator | 2025-04-14 01:29:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:56.497058 | orchestrator | 2025-04-14 01:29:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:29:59.543361 | orchestrator | 2025-04-14 01:29:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:29:59.543499 | orchestrator | 2025-04-14 01:29:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:02.586427 | orchestrator | 2025-04-14 01:29:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:02.586566 | orchestrator | 2025-04-14 01:30:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:05.630113 | orchestrator | 2025-04-14 01:30:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:05.630203 | orchestrator | 2025-04-14 01:30:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:08.681469 | orchestrator | 2025-04-14 01:30:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:08.681681 | orchestrator | 2025-04-14 01:30:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:11.741227 | orchestrator | 2025-04-14 01:30:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:11.741366 | orchestrator | 2025-04-14 01:30:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:14.801517 | orchestrator | 2025-04-14 01:30:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:14.801727 | orchestrator | 2025-04-14 01:30:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:17.847217 | orchestrator | 2025-04-14 01:30:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:17.847353 | orchestrator | 2025-04-14 01:30:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:20.901500 | orchestrator | 2025-04-14 01:30:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:20.901726 | orchestrator | 2025-04-14 01:30:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:23.948995 | orchestrator | 2025-04-14 01:30:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:23.949167 | orchestrator | 2025-04-14 01:30:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:27.001757 | orchestrator | 2025-04-14 01:30:23 | INFO  | Wait 1 second(s) 
until the next check 2025-04-14 01:30:27.001902 | orchestrator | 2025-04-14 01:30:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:30.059099 | orchestrator | 2025-04-14 01:30:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:30.059238 | orchestrator | 2025-04-14 01:30:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:33.103845 | orchestrator | 2025-04-14 01:30:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:33.104016 | orchestrator | 2025-04-14 01:30:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:36.159009 | orchestrator | 2025-04-14 01:30:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:36.159151 | orchestrator | 2025-04-14 01:30:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:39.209157 | orchestrator | 2025-04-14 01:30:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:39.209301 | orchestrator | 2025-04-14 01:30:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:42.260323 | orchestrator | 2025-04-14 01:30:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:42.260459 | orchestrator | 2025-04-14 01:30:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:45.311366 | orchestrator | 2025-04-14 01:30:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:45.311551 | orchestrator | 2025-04-14 01:30:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:48.355316 | orchestrator | 2025-04-14 01:30:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:48.355455 | orchestrator | 2025-04-14 01:30:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:51.407166 | orchestrator | 2025-04-14 01:30:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:51.407287 | orchestrator | 2025-04-14 01:30:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:54.460582 | orchestrator | 2025-04-14 01:30:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:54.460748 | orchestrator | 2025-04-14 01:30:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:30:57.505746 | orchestrator | 2025-04-14 01:30:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:30:57.505901 | orchestrator | 2025-04-14 01:30:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:00.555618 | orchestrator | 2025-04-14 01:30:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:00.555823 | orchestrator | 2025-04-14 01:31:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:03.600765 | orchestrator | 2025-04-14 01:31:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:03.600909 | orchestrator | 2025-04-14 01:31:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:06.651118 | orchestrator | 2025-04-14 01:31:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:06.651240 | orchestrator | 2025-04-14 01:31:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:09.700295 | orchestrator | 2025-04-14 01:31:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:09.700438 | orchestrator | 2025-04-14 
01:31:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:12.755091 | orchestrator | 2025-04-14 01:31:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:12.755204 | orchestrator | 2025-04-14 01:31:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:15.806472 | orchestrator | 2025-04-14 01:31:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:15.806623 | orchestrator | 2025-04-14 01:31:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:18.847291 | orchestrator | 2025-04-14 01:31:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:18.847426 | orchestrator | 2025-04-14 01:31:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:21.901631 | orchestrator | 2025-04-14 01:31:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:21.901847 | orchestrator | 2025-04-14 01:31:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:24.945846 | orchestrator | 2025-04-14 01:31:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:24.945958 | orchestrator | 2025-04-14 01:31:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:27.991585 | orchestrator | 2025-04-14 01:31:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:27.991796 | orchestrator | 2025-04-14 01:31:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:31.050322 | orchestrator | 2025-04-14 01:31:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:31.050469 | orchestrator | 2025-04-14 01:31:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:34.092971 | orchestrator | 2025-04-14 01:31:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:34.093076 | orchestrator | 2025-04-14 01:31:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:37.140145 | orchestrator | 2025-04-14 01:31:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:37.140267 | orchestrator | 2025-04-14 01:31:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:40.185582 | orchestrator | 2025-04-14 01:31:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:40.185806 | orchestrator | 2025-04-14 01:31:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:43.231088 | orchestrator | 2025-04-14 01:31:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:43.231226 | orchestrator | 2025-04-14 01:31:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:46.273073 | orchestrator | 2025-04-14 01:31:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:46.273230 | orchestrator | 2025-04-14 01:31:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:49.338846 | orchestrator | 2025-04-14 01:31:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:49.338985 | orchestrator | 2025-04-14 01:31:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:52.387250 | orchestrator | 2025-04-14 01:31:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:52.387396 | orchestrator | 2025-04-14 01:31:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
2025-04-14 01:31:55.436002 | orchestrator | 2025-04-14 01:31:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:55.436141 | orchestrator | 2025-04-14 01:31:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:31:58.485028 | orchestrator | 2025-04-14 01:31:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:31:58.485169 | orchestrator | 2025-04-14 01:31:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:01.529238 | orchestrator | 2025-04-14 01:31:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:01.529373 | orchestrator | 2025-04-14 01:32:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:04.586957 | orchestrator | 2025-04-14 01:32:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:04.587097 | orchestrator | 2025-04-14 01:32:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:07.637199 | orchestrator | 2025-04-14 01:32:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:07.637342 | orchestrator | 2025-04-14 01:32:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:10.687037 | orchestrator | 2025-04-14 01:32:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:10.687144 | orchestrator | 2025-04-14 01:32:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:13.736736 | orchestrator | 2025-04-14 01:32:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:13.736934 | orchestrator | 2025-04-14 01:32:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:16.786079 | orchestrator | 2025-04-14 01:32:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:16.786222 | orchestrator | 2025-04-14 01:32:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:19.846214 | orchestrator | 2025-04-14 01:32:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:19.846358 | orchestrator | 2025-04-14 01:32:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:22.892300 | orchestrator | 2025-04-14 01:32:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:22.892442 | orchestrator | 2025-04-14 01:32:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:25.932269 | orchestrator | 2025-04-14 01:32:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:25.932404 | orchestrator | 2025-04-14 01:32:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:28.977158 | orchestrator | 2025-04-14 01:32:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:28.977307 | orchestrator | 2025-04-14 01:32:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:32.032432 | orchestrator | 2025-04-14 01:32:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:32.032577 | orchestrator | 2025-04-14 01:32:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:35.081437 | orchestrator | 2025-04-14 01:32:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:35.081562 | orchestrator | 2025-04-14 01:32:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:38.132539 | orchestrator | 2025-04-14 01:32:35 | INFO  | Wait 1 
second(s) until the next check 2025-04-14 01:32:38.132689 | orchestrator | 2025-04-14 01:32:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:41.182108 | orchestrator | 2025-04-14 01:32:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:41.182210 | orchestrator | 2025-04-14 01:32:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:44.226388 | orchestrator | 2025-04-14 01:32:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:44.226512 | orchestrator | 2025-04-14 01:32:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:47.276868 | orchestrator | 2025-04-14 01:32:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:47.277040 | orchestrator | 2025-04-14 01:32:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:50.331106 | orchestrator | 2025-04-14 01:32:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:50.331206 | orchestrator | 2025-04-14 01:32:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:53.383454 | orchestrator | 2025-04-14 01:32:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:53.383591 | orchestrator | 2025-04-14 01:32:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:56.432316 | orchestrator | 2025-04-14 01:32:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:56.432460 | orchestrator | 2025-04-14 01:32:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:59.477161 | orchestrator | 2025-04-14 01:32:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:32:59.477314 | orchestrator | 2025-04-14 01:32:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:32:59.478493 | orchestrator | 2025-04-14 01:32:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:02.532066 | orchestrator | 2025-04-14 01:33:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:05.578872 | orchestrator | 2025-04-14 01:33:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:05.579013 | orchestrator | 2025-04-14 01:33:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:08.629074 | orchestrator | 2025-04-14 01:33:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:08.629234 | orchestrator | 2025-04-14 01:33:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:11.670467 | orchestrator | 2025-04-14 01:33:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:11.670605 | orchestrator | 2025-04-14 01:33:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:14.715874 | orchestrator | 2025-04-14 01:33:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:14.716019 | orchestrator | 2025-04-14 01:33:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:17.765750 | orchestrator | 2025-04-14 01:33:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:17.765906 | orchestrator | 2025-04-14 01:33:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:20.813707 | orchestrator | 2025-04-14 01:33:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:20.813879 | orchestrator | 
2025-04-14 01:33:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:23.858580 | orchestrator | 2025-04-14 01:33:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:23.858720 | orchestrator | 2025-04-14 01:33:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:26.909585 | orchestrator | 2025-04-14 01:33:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:26.909727 | orchestrator | 2025-04-14 01:33:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:29.957050 | orchestrator | 2025-04-14 01:33:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:29.957182 | orchestrator | 2025-04-14 01:33:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:33.010841 | orchestrator | 2025-04-14 01:33:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:33.010962 | orchestrator | 2025-04-14 01:33:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:36.061914 | orchestrator | 2025-04-14 01:33:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:36.062141 | orchestrator | 2025-04-14 01:33:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:39.117248 | orchestrator | 2025-04-14 01:33:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:39.117400 | orchestrator | 2025-04-14 01:33:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:42.166219 | orchestrator | 2025-04-14 01:33:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:42.166346 | orchestrator | 2025-04-14 01:33:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:45.215358 | orchestrator | 2025-04-14 01:33:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:45.215529 | orchestrator | 2025-04-14 01:33:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:48.263100 | orchestrator | 2025-04-14 01:33:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:48.263276 | orchestrator | 2025-04-14 01:33:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:51.320541 | orchestrator | 2025-04-14 01:33:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:51.320686 | orchestrator | 2025-04-14 01:33:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:54.364246 | orchestrator | 2025-04-14 01:33:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:54.364387 | orchestrator | 2025-04-14 01:33:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:33:57.405243 | orchestrator | 2025-04-14 01:33:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:33:57.405388 | orchestrator | 2025-04-14 01:33:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:00.459257 | orchestrator | 2025-04-14 01:33:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:00.459396 | orchestrator | 2025-04-14 01:34:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:03.514246 | orchestrator | 2025-04-14 01:34:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:03.514390 | orchestrator | 2025-04-14 01:34:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:34:06.566355 | orchestrator | 2025-04-14 01:34:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:06.566475 | orchestrator | 2025-04-14 01:34:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:09.628966 | orchestrator | 2025-04-14 01:34:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:09.629137 | orchestrator | 2025-04-14 01:34:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:12.684923 | orchestrator | 2025-04-14 01:34:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:12.685087 | orchestrator | 2025-04-14 01:34:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:15.724765 | orchestrator | 2025-04-14 01:34:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:15.724968 | orchestrator | 2025-04-14 01:34:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:18.774407 | orchestrator | 2025-04-14 01:34:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:18.774527 | orchestrator | 2025-04-14 01:34:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:21.826202 | orchestrator | 2025-04-14 01:34:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:21.826348 | orchestrator | 2025-04-14 01:34:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:24.879072 | orchestrator | 2025-04-14 01:34:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:24.879246 | orchestrator | 2025-04-14 01:34:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:27.932210 | orchestrator | 2025-04-14 01:34:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:27.932306 | orchestrator | 2025-04-14 01:34:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:30.985733 | orchestrator | 2025-04-14 01:34:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:30.985959 | orchestrator | 2025-04-14 01:34:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:34.043203 | orchestrator | 2025-04-14 01:34:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:34.043377 | orchestrator | 2025-04-14 01:34:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:37.087975 | orchestrator | 2025-04-14 01:34:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:37.088087 | orchestrator | 2025-04-14 01:34:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:40.143955 | orchestrator | 2025-04-14 01:34:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:40.144111 | orchestrator | 2025-04-14 01:34:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:43.192934 | orchestrator | 2025-04-14 01:34:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:43.193072 | orchestrator | 2025-04-14 01:34:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:46.234089 | orchestrator | 2025-04-14 01:34:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:46.234232 | orchestrator | 2025-04-14 01:34:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:49.280534 | orchestrator | 2025-04-14 01:34:46 | 
INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:49.280678 | orchestrator | 2025-04-14 01:34:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:52.333936 | orchestrator | 2025-04-14 01:34:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:52.334145 | orchestrator | 2025-04-14 01:34:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:55.389841 | orchestrator | 2025-04-14 01:34:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:55.390109 | orchestrator | 2025-04-14 01:34:55 | INFO  | Task e656b056-4ac7-468e-917d-9387742be84a is in state STARTED 2025-04-14 01:34:55.391949 | orchestrator | 2025-04-14 01:34:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:34:58.442997 | orchestrator | 2025-04-14 01:34:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:34:58.443093 | orchestrator | 2025-04-14 01:34:58 | INFO  | Task e656b056-4ac7-468e-917d-9387742be84a is in state STARTED 2025-04-14 01:34:58.445032 | orchestrator | 2025-04-14 01:34:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:01.506965 | orchestrator | 2025-04-14 01:34:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:01.507127 | orchestrator | 2025-04-14 01:35:01 | INFO  | Task e656b056-4ac7-468e-917d-9387742be84a is in state STARTED 2025-04-14 01:35:01.509386 | orchestrator | 2025-04-14 01:35:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:04.574297 | orchestrator | 2025-04-14 01:35:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:04.574423 | orchestrator | 2025-04-14 01:35:04 | INFO  | Task e656b056-4ac7-468e-917d-9387742be84a is in state STARTED 2025-04-14 01:35:04.575995 | orchestrator | 2025-04-14 01:35:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:04.576220 | orchestrator | 2025-04-14 01:35:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:07.621363 | orchestrator | 2025-04-14 01:35:07 | INFO  | Task e656b056-4ac7-468e-917d-9387742be84a is in state SUCCESS 2025-04-14 01:35:07.622415 | orchestrator | 2025-04-14 01:35:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:10.674974 | orchestrator | 2025-04-14 01:35:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:10.675129 | orchestrator | 2025-04-14 01:35:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:13.723840 | orchestrator | 2025-04-14 01:35:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:13.724034 | orchestrator | 2025-04-14 01:35:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:16.768131 | orchestrator | 2025-04-14 01:35:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:16.768275 | orchestrator | 2025-04-14 01:35:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:19.813522 | orchestrator | 2025-04-14 01:35:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:19.813678 | orchestrator | 2025-04-14 01:35:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:22.859356 | orchestrator | 2025-04-14 01:35:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:22.859484 | orchestrator | 2025-04-14 01:35:22 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:25.907664 | orchestrator | 2025-04-14 01:35:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:25.907762 | orchestrator | 2025-04-14 01:35:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:28.951821 | orchestrator | 2025-04-14 01:35:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:28.952042 | orchestrator | 2025-04-14 01:35:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:32.005848 | orchestrator | 2025-04-14 01:35:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:32.006136 | orchestrator | 2025-04-14 01:35:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:35.054997 | orchestrator | 2025-04-14 01:35:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:35.055140 | orchestrator | 2025-04-14 01:35:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:38.097466 | orchestrator | 2025-04-14 01:35:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:38.097561 | orchestrator | 2025-04-14 01:35:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:41.157550 | orchestrator | 2025-04-14 01:35:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:41.157725 | orchestrator | 2025-04-14 01:35:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:44.211083 | orchestrator | 2025-04-14 01:35:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:44.211216 | orchestrator | 2025-04-14 01:35:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:47.265491 | orchestrator | 2025-04-14 01:35:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:47.265634 | orchestrator | 2025-04-14 01:35:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:50.309415 | orchestrator | 2025-04-14 01:35:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:50.309534 | orchestrator | 2025-04-14 01:35:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:53.356469 | orchestrator | 2025-04-14 01:35:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:53.356618 | orchestrator | 2025-04-14 01:35:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:56.400353 | orchestrator | 2025-04-14 01:35:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:56.400486 | orchestrator | 2025-04-14 01:35:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:35:59.450347 | orchestrator | 2025-04-14 01:35:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:35:59.450476 | orchestrator | 2025-04-14 01:35:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:02.506450 | orchestrator | 2025-04-14 01:35:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:02.506622 | orchestrator | 2025-04-14 01:36:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:05.557478 | orchestrator | 2025-04-14 01:36:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:05.557649 | orchestrator | 2025-04-14 01:36:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 
01:36:08.597980 | orchestrator | 2025-04-14 01:36:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:08.598133 | orchestrator | 2025-04-14 01:36:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:11.646608 | orchestrator | 2025-04-14 01:36:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:11.646752 | orchestrator | 2025-04-14 01:36:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:14.694607 | orchestrator | 2025-04-14 01:36:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:14.694718 | orchestrator | 2025-04-14 01:36:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:17.738874 | orchestrator | 2025-04-14 01:36:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:17.739051 | orchestrator | 2025-04-14 01:36:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:20.784422 | orchestrator | 2025-04-14 01:36:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:20.784517 | orchestrator | 2025-04-14 01:36:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:23.830800 | orchestrator | 2025-04-14 01:36:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:23.831043 | orchestrator | 2025-04-14 01:36:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:26.884872 | orchestrator | 2025-04-14 01:36:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:26.885059 | orchestrator | 2025-04-14 01:36:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:29.936267 | orchestrator | 2025-04-14 01:36:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:29.936445 | orchestrator | 2025-04-14 01:36:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:32.990813 | orchestrator | 2025-04-14 01:36:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:32.990989 | orchestrator | 2025-04-14 01:36:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:36.042190 | orchestrator | 2025-04-14 01:36:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:36.042359 | orchestrator | 2025-04-14 01:36:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:39.095715 | orchestrator | 2025-04-14 01:36:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:39.095850 | orchestrator | 2025-04-14 01:36:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:42.144194 | orchestrator | 2025-04-14 01:36:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:42.144357 | orchestrator | 2025-04-14 01:36:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:45.190963 | orchestrator | 2025-04-14 01:36:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:45.191100 | orchestrator | 2025-04-14 01:36:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:48.252434 | orchestrator | 2025-04-14 01:36:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:48.252619 | orchestrator | 2025-04-14 01:36:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:51.297754 | orchestrator | 2025-04-14 01:36:48 | INFO  | Wait 1 second(s) 
until the next check 2025-04-14 01:36:51.297990 | orchestrator | 2025-04-14 01:36:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:54.346860 | orchestrator | 2025-04-14 01:36:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:54.347046 | orchestrator | 2025-04-14 01:36:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:36:57.394788 | orchestrator | 2025-04-14 01:36:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:36:57.394979 | orchestrator | 2025-04-14 01:36:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:00.443307 | orchestrator | 2025-04-14 01:36:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:00.443452 | orchestrator | 2025-04-14 01:37:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:03.493750 | orchestrator | 2025-04-14 01:37:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:03.493886 | orchestrator | 2025-04-14 01:37:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:06.532057 | orchestrator | 2025-04-14 01:37:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:06.532198 | orchestrator | 2025-04-14 01:37:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:09.589825 | orchestrator | 2025-04-14 01:37:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:09.589985 | orchestrator | 2025-04-14 01:37:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:12.641199 | orchestrator | 2025-04-14 01:37:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:12.642149 | orchestrator | 2025-04-14 01:37:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:15.688019 | orchestrator | 2025-04-14 01:37:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:15.688131 | orchestrator | 2025-04-14 01:37:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:18.743650 | orchestrator | 2025-04-14 01:37:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:18.743742 | orchestrator | 2025-04-14 01:37:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:21.792033 | orchestrator | 2025-04-14 01:37:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:21.792175 | orchestrator | 2025-04-14 01:37:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:24.837911 | orchestrator | 2025-04-14 01:37:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:24.838219 | orchestrator | 2025-04-14 01:37:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:27.886720 | orchestrator | 2025-04-14 01:37:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:27.886829 | orchestrator | 2025-04-14 01:37:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:30.936395 | orchestrator | 2025-04-14 01:37:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:30.936544 | orchestrator | 2025-04-14 01:37:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:33.991552 | orchestrator | 2025-04-14 01:37:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:33.991723 | orchestrator | 2025-04-14 
01:37:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:37.040143 | orchestrator | 2025-04-14 01:37:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:37.040289 | orchestrator | 2025-04-14 01:37:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:40.089128 | orchestrator | 2025-04-14 01:37:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:40.089271 | orchestrator | 2025-04-14 01:37:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:43.135435 | orchestrator | 2025-04-14 01:37:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:43.135546 | orchestrator | 2025-04-14 01:37:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:46.179746 | orchestrator | 2025-04-14 01:37:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:46.179887 | orchestrator | 2025-04-14 01:37:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:49.232402 | orchestrator | 2025-04-14 01:37:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:49.232544 | orchestrator | 2025-04-14 01:37:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:52.276465 | orchestrator | 2025-04-14 01:37:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:52.276633 | orchestrator | 2025-04-14 01:37:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:55.327102 | orchestrator | 2025-04-14 01:37:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:55.327272 | orchestrator | 2025-04-14 01:37:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:37:58.368066 | orchestrator | 2025-04-14 01:37:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:37:58.368206 | orchestrator | 2025-04-14 01:37:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:01.410555 | orchestrator | 2025-04-14 01:37:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:01.410689 | orchestrator | 2025-04-14 01:38:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:04.465613 | orchestrator | 2025-04-14 01:38:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:04.465762 | orchestrator | 2025-04-14 01:38:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:07.511663 | orchestrator | 2025-04-14 01:38:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:07.511807 | orchestrator | 2025-04-14 01:38:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:10.560679 | orchestrator | 2025-04-14 01:38:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:10.560849 | orchestrator | 2025-04-14 01:38:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:13.607773 | orchestrator | 2025-04-14 01:38:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:13.607987 | orchestrator | 2025-04-14 01:38:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:16.659197 | orchestrator | 2025-04-14 01:38:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:16.659349 | orchestrator | 2025-04-14 01:38:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 
2025-04-14 01:38:19.711673 | orchestrator | 2025-04-14 01:38:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:19.711812 | orchestrator | 2025-04-14 01:38:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:22.759710 | orchestrator | 2025-04-14 01:38:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:22.759850 | orchestrator | 2025-04-14 01:38:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:25.813531 | orchestrator | 2025-04-14 01:38:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:25.813675 | orchestrator | 2025-04-14 01:38:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:28.860360 | orchestrator | 2025-04-14 01:38:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:28.860505 | orchestrator | 2025-04-14 01:38:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:31.910768 | orchestrator | 2025-04-14 01:38:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:31.910904 | orchestrator | 2025-04-14 01:38:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:34.961291 | orchestrator | 2025-04-14 01:38:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:34.961449 | orchestrator | 2025-04-14 01:38:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:38.004055 | orchestrator | 2025-04-14 01:38:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:38.004192 | orchestrator | 2025-04-14 01:38:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:41.052933 | orchestrator | 2025-04-14 01:38:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:41.053243 | orchestrator | 2025-04-14 01:38:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:44.098338 | orchestrator | 2025-04-14 01:38:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:44.098477 | orchestrator | 2025-04-14 01:38:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:47.150380 | orchestrator | 2025-04-14 01:38:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:47.150571 | orchestrator | 2025-04-14 01:38:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:50.211530 | orchestrator | 2025-04-14 01:38:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:50.211669 | orchestrator | 2025-04-14 01:38:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:53.268020 | orchestrator | 2025-04-14 01:38:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:53.268169 | orchestrator | 2025-04-14 01:38:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:56.315817 | orchestrator | 2025-04-14 01:38:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:56.316050 | orchestrator | 2025-04-14 01:38:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:38:59.363841 | orchestrator | 2025-04-14 01:38:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:38:59.364048 | orchestrator | 2025-04-14 01:38:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:02.412462 | orchestrator | 2025-04-14 01:38:59 | INFO  | Wait 1 
second(s) until the next check 2025-04-14 01:39:02.412556 | orchestrator | 2025-04-14 01:39:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:05.462413 | orchestrator | 2025-04-14 01:39:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:05.462560 | orchestrator | 2025-04-14 01:39:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:08.515616 | orchestrator | 2025-04-14 01:39:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:08.515759 | orchestrator | 2025-04-14 01:39:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:11.560898 | orchestrator | 2025-04-14 01:39:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:11.561113 | orchestrator | 2025-04-14 01:39:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:14.604666 | orchestrator | 2025-04-14 01:39:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:14.604776 | orchestrator | 2025-04-14 01:39:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:17.649439 | orchestrator | 2025-04-14 01:39:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:17.649591 | orchestrator | 2025-04-14 01:39:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:20.694756 | orchestrator | 2025-04-14 01:39:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:20.694895 | orchestrator | 2025-04-14 01:39:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:23.738938 | orchestrator | 2025-04-14 01:39:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:23.739104 | orchestrator | 2025-04-14 01:39:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:26.780909 | orchestrator | 2025-04-14 01:39:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:26.781097 | orchestrator | 2025-04-14 01:39:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:29.824823 | orchestrator | 2025-04-14 01:39:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:29.824924 | orchestrator | 2025-04-14 01:39:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:32.879554 | orchestrator | 2025-04-14 01:39:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:32.879696 | orchestrator | 2025-04-14 01:39:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:35.920207 | orchestrator | 2025-04-14 01:39:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:35.920361 | orchestrator | 2025-04-14 01:39:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:38.966791 | orchestrator | 2025-04-14 01:39:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:38.966942 | orchestrator | 2025-04-14 01:39:38 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:42.017487 | orchestrator | 2025-04-14 01:39:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:42.017668 | orchestrator | 2025-04-14 01:39:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:45.062382 | orchestrator | 2025-04-14 01:39:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:45.062545 | orchestrator | 
2025-04-14 01:39:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:48.113071 | orchestrator | 2025-04-14 01:39:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:48.113222 | orchestrator | 2025-04-14 01:39:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:51.154272 | orchestrator | 2025-04-14 01:39:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:51.154363 | orchestrator | 2025-04-14 01:39:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:54.204306 | orchestrator | 2025-04-14 01:39:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:54.204420 | orchestrator | 2025-04-14 01:39:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:39:57.252378 | orchestrator | 2025-04-14 01:39:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:39:57.252520 | orchestrator | 2025-04-14 01:39:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:00.300131 | orchestrator | 2025-04-14 01:39:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:00.300234 | orchestrator | 2025-04-14 01:40:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:03.356712 | orchestrator | 2025-04-14 01:40:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:03.356851 | orchestrator | 2025-04-14 01:40:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:06.398391 | orchestrator | 2025-04-14 01:40:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:06.398559 | orchestrator | 2025-04-14 01:40:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:09.454722 | orchestrator | 2025-04-14 01:40:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:09.454957 | orchestrator | 2025-04-14 01:40:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:12.504807 | orchestrator | 2025-04-14 01:40:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:12.504952 | orchestrator | 2025-04-14 01:40:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:15.560402 | orchestrator | 2025-04-14 01:40:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:15.560538 | orchestrator | 2025-04-14 01:40:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:18.615013 | orchestrator | 2025-04-14 01:40:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:18.615130 | orchestrator | 2025-04-14 01:40:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:21.661829 | orchestrator | 2025-04-14 01:40:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:21.662068 | orchestrator | 2025-04-14 01:40:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:24.710325 | orchestrator | 2025-04-14 01:40:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:24.710484 | orchestrator | 2025-04-14 01:40:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:27.757129 | orchestrator | 2025-04-14 01:40:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:27.757269 | orchestrator | 2025-04-14 01:40:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in 
state STARTED 2025-04-14 01:40:30.805432 | orchestrator | 2025-04-14 01:40:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:30.805626 | orchestrator | 2025-04-14 01:40:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:33.861417 | orchestrator | 2025-04-14 01:40:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:33.861543 | orchestrator | 2025-04-14 01:40:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:36.907445 | orchestrator | 2025-04-14 01:40:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:36.907632 | orchestrator | 2025-04-14 01:40:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:39.961077 | orchestrator | 2025-04-14 01:40:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:39.961231 | orchestrator | 2025-04-14 01:40:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:43.018289 | orchestrator | 2025-04-14 01:40:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:43.018451 | orchestrator | 2025-04-14 01:40:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:46.062093 | orchestrator | 2025-04-14 01:40:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:46.062240 | orchestrator | 2025-04-14 01:40:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:49.108202 | orchestrator | 2025-04-14 01:40:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:49.108350 | orchestrator | 2025-04-14 01:40:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:52.151670 | orchestrator | 2025-04-14 01:40:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:52.151769 | orchestrator | 2025-04-14 01:40:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:55.202594 | orchestrator | 2025-04-14 01:40:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:55.202738 | orchestrator | 2025-04-14 01:40:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:40:58.256298 | orchestrator | 2025-04-14 01:40:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:40:58.256441 | orchestrator | 2025-04-14 01:40:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:01.311359 | orchestrator | 2025-04-14 01:40:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:01.311527 | orchestrator | 2025-04-14 01:41:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:04.368205 | orchestrator | 2025-04-14 01:41:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:04.368326 | orchestrator | 2025-04-14 01:41:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:07.417959 | orchestrator | 2025-04-14 01:41:04 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:07.418207 | orchestrator | 2025-04-14 01:41:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:10.468598 | orchestrator | 2025-04-14 01:41:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:10.468716 | orchestrator | 2025-04-14 01:41:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:13.516284 | orchestrator | 2025-04-14 01:41:10 | 
INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:13.516428 | orchestrator | 2025-04-14 01:41:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:16.573601 | orchestrator | 2025-04-14 01:41:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:16.573743 | orchestrator | 2025-04-14 01:41:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:19.623872 | orchestrator | 2025-04-14 01:41:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:19.624074 | orchestrator | 2025-04-14 01:41:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:22.670856 | orchestrator | 2025-04-14 01:41:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:22.671047 | orchestrator | 2025-04-14 01:41:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:25.724354 | orchestrator | 2025-04-14 01:41:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:25.724501 | orchestrator | 2025-04-14 01:41:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:28.769281 | orchestrator | 2025-04-14 01:41:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:28.769376 | orchestrator | 2025-04-14 01:41:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:31.813148 | orchestrator | 2025-04-14 01:41:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:31.813288 | orchestrator | 2025-04-14 01:41:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:34.864370 | orchestrator | 2025-04-14 01:41:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:34.864479 | orchestrator | 2025-04-14 01:41:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:37.907167 | orchestrator | 2025-04-14 01:41:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:37.907287 | orchestrator | 2025-04-14 01:41:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:40.956458 | orchestrator | 2025-04-14 01:41:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:40.956581 | orchestrator | 2025-04-14 01:41:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:44.008894 | orchestrator | 2025-04-14 01:41:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:44.009097 | orchestrator | 2025-04-14 01:41:44 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:47.056662 | orchestrator | 2025-04-14 01:41:44 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:47.056778 | orchestrator | 2025-04-14 01:41:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:50.107720 | orchestrator | 2025-04-14 01:41:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:50.107847 | orchestrator | 2025-04-14 01:41:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:53.163330 | orchestrator | 2025-04-14 01:41:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:53.163467 | orchestrator | 2025-04-14 01:41:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:56.214551 | orchestrator | 2025-04-14 01:41:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:56.214671 | 
orchestrator | 2025-04-14 01:41:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:41:59.265647 | orchestrator | 2025-04-14 01:41:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:41:59.265791 | orchestrator | 2025-04-14 01:41:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:02.314362 | orchestrator | 2025-04-14 01:41:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:02.314534 | orchestrator | 2025-04-14 01:42:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:05.363502 | orchestrator | 2025-04-14 01:42:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:05.363620 | orchestrator | 2025-04-14 01:42:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:08.411085 | orchestrator | 2025-04-14 01:42:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:08.411232 | orchestrator | 2025-04-14 01:42:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:11.470303 | orchestrator | 2025-04-14 01:42:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:11.470477 | orchestrator | 2025-04-14 01:42:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:14.522789 | orchestrator | 2025-04-14 01:42:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:14.522884 | orchestrator | 2025-04-14 01:42:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:17.571749 | orchestrator | 2025-04-14 01:42:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:17.571891 | orchestrator | 2025-04-14 01:42:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:20.624887 | orchestrator | 2025-04-14 01:42:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:20.625064 | orchestrator | 2025-04-14 01:42:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:23.671056 | orchestrator | 2025-04-14 01:42:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:23.671189 | orchestrator | 2025-04-14 01:42:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:26.717179 | orchestrator | 2025-04-14 01:42:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:26.717314 | orchestrator | 2025-04-14 01:42:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:29.765325 | orchestrator | 2025-04-14 01:42:26 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:29.765458 | orchestrator | 2025-04-14 01:42:29 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:32.812653 | orchestrator | 2025-04-14 01:42:29 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:32.812794 | orchestrator | 2025-04-14 01:42:32 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:35.866502 | orchestrator | 2025-04-14 01:42:32 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:35.866643 | orchestrator | 2025-04-14 01:42:35 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:38.913453 | orchestrator | 2025-04-14 01:42:35 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:38.913594 | orchestrator | 2025-04-14 01:42:38 | INFO  | Task 
afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:41.967042 | orchestrator | 2025-04-14 01:42:38 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:41.967194 | orchestrator | 2025-04-14 01:42:41 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:45.009260 | orchestrator | 2025-04-14 01:42:41 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:45.009390 | orchestrator | 2025-04-14 01:42:45 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:48.053441 | orchestrator | 2025-04-14 01:42:45 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:48.053608 | orchestrator | 2025-04-14 01:42:48 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:51.113029 | orchestrator | 2025-04-14 01:42:48 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:51.113162 | orchestrator | 2025-04-14 01:42:51 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:54.155189 | orchestrator | 2025-04-14 01:42:51 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:54.156136 | orchestrator | 2025-04-14 01:42:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:42:57.208983 | orchestrator | 2025-04-14 01:42:54 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:42:57.209128 | orchestrator | 2025-04-14 01:42:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:00.253267 | orchestrator | 2025-04-14 01:42:57 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:00.253414 | orchestrator | 2025-04-14 01:43:00 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:03.313922 | orchestrator | 2025-04-14 01:43:00 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:03.314092 | orchestrator | 2025-04-14 01:43:03 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:06.361010 | orchestrator | 2025-04-14 01:43:03 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:06.361165 | orchestrator | 2025-04-14 01:43:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:09.408291 | orchestrator | 2025-04-14 01:43:06 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:09.408439 | orchestrator | 2025-04-14 01:43:09 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:12.456202 | orchestrator | 2025-04-14 01:43:09 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:12.456328 | orchestrator | 2025-04-14 01:43:12 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:15.505848 | orchestrator | 2025-04-14 01:43:12 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:15.506155 | orchestrator | 2025-04-14 01:43:15 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:18.548735 | orchestrator | 2025-04-14 01:43:15 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:18.548887 | orchestrator | 2025-04-14 01:43:18 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:21.594240 | orchestrator | 2025-04-14 01:43:18 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:21.594389 | orchestrator | 2025-04-14 01:43:21 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 
01:43:21.594930 | orchestrator | 2025-04-14 01:43:21 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:24.648459 | orchestrator | 2025-04-14 01:43:24 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:27.705455 | orchestrator | 2025-04-14 01:43:24 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:27.705549 | orchestrator | 2025-04-14 01:43:27 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:30.753879 | orchestrator | 2025-04-14 01:43:27 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:30.754076 | orchestrator | 2025-04-14 01:43:30 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:33.807098 | orchestrator | 2025-04-14 01:43:30 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:33.807227 | orchestrator | 2025-04-14 01:43:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:36.857074 | orchestrator | 2025-04-14 01:43:33 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:36.857206 | orchestrator | 2025-04-14 01:43:36 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:39.908894 | orchestrator | 2025-04-14 01:43:36 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:39.909088 | orchestrator | 2025-04-14 01:43:39 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:42.965433 | orchestrator | 2025-04-14 01:43:39 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:42.965626 | orchestrator | 2025-04-14 01:43:42 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:46.021988 | orchestrator | 2025-04-14 01:43:42 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:46.022203 | orchestrator | 2025-04-14 01:43:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:49.077787 | orchestrator | 2025-04-14 01:43:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:49.077915 | orchestrator | 2025-04-14 01:43:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:49.078671 | orchestrator | 2025-04-14 01:43:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:52.130767 | orchestrator | 2025-04-14 01:43:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:55.183467 | orchestrator | 2025-04-14 01:43:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:55.183648 | orchestrator | 2025-04-14 01:43:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:43:58.232213 | orchestrator | 2025-04-14 01:43:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:43:58.232347 | orchestrator | 2025-04-14 01:43:58 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:01.285041 | orchestrator | 2025-04-14 01:43:58 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:01.285171 | orchestrator | 2025-04-14 01:44:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:04.334263 | orchestrator | 2025-04-14 01:44:01 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:04.334405 | orchestrator | 2025-04-14 01:44:04 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:07.383106 | orchestrator | 2025-04-14 01:44:04 | INFO  | Wait 1 second(s) 
until the next check 2025-04-14 01:44:07.383250 | orchestrator | 2025-04-14 01:44:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:10.430450 | orchestrator | 2025-04-14 01:44:07 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:10.430584 | orchestrator | 2025-04-14 01:44:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:13.486763 | orchestrator | 2025-04-14 01:44:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:13.487007 | orchestrator | 2025-04-14 01:44:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:16.530130 | orchestrator | 2025-04-14 01:44:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:16.530304 | orchestrator | 2025-04-14 01:44:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:19.572173 | orchestrator | 2025-04-14 01:44:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:19.572324 | orchestrator | 2025-04-14 01:44:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:22.624206 | orchestrator | 2025-04-14 01:44:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:22.624344 | orchestrator | 2025-04-14 01:44:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:25.671734 | orchestrator | 2025-04-14 01:44:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:25.671907 | orchestrator | 2025-04-14 01:44:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:28.728222 | orchestrator | 2025-04-14 01:44:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:28.728433 | orchestrator | 2025-04-14 01:44:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:31.775037 | orchestrator | 2025-04-14 01:44:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:31.775182 | orchestrator | 2025-04-14 01:44:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:34.819983 | orchestrator | 2025-04-14 01:44:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:34.820127 | orchestrator | 2025-04-14 01:44:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:37.870287 | orchestrator | 2025-04-14 01:44:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:37.870429 | orchestrator | 2025-04-14 01:44:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:40.923297 | orchestrator | 2025-04-14 01:44:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:40.923427 | orchestrator | 2025-04-14 01:44:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:43.969122 | orchestrator | 2025-04-14 01:44:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:43.969267 | orchestrator | 2025-04-14 01:44:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:47.026991 | orchestrator | 2025-04-14 01:44:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:47.027084 | orchestrator | 2025-04-14 01:44:47 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:50.081997 | orchestrator | 2025-04-14 01:44:47 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:50.082191 | orchestrator | 2025-04-14 
01:44:50 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:53.129525 | orchestrator | 2025-04-14 01:44:50 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:53.129664 | orchestrator | 2025-04-14 01:44:53 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:56.199539 | orchestrator | 2025-04-14 01:44:53 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:56.199731 | orchestrator | 2025-04-14 01:44:56 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:44:56.202397 | orchestrator | 2025-04-14 01:44:56 | INFO  | Task 54ef94a8-2f50-4d3d-a8e4-6d0716d91855 is in state STARTED 2025-04-14 01:44:56.202780 | orchestrator | 2025-04-14 01:44:56 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:44:59.271613 | orchestrator | 2025-04-14 01:44:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:02.331045 | orchestrator | 2025-04-14 01:44:59 | INFO  | Task 54ef94a8-2f50-4d3d-a8e4-6d0716d91855 is in state STARTED 2025-04-14 01:45:02.331171 | orchestrator | 2025-04-14 01:44:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:02.331192 | orchestrator | 2025-04-14 01:45:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:02.331528 | orchestrator | 2025-04-14 01:45:02 | INFO  | Task 54ef94a8-2f50-4d3d-a8e4-6d0716d91855 is in state STARTED 2025-04-14 01:45:02.331648 | orchestrator | 2025-04-14 01:45:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:05.395227 | orchestrator | 2025-04-14 01:45:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:05.395863 | orchestrator | 2025-04-14 01:45:05 | INFO  | Task 54ef94a8-2f50-4d3d-a8e4-6d0716d91855 is in state SUCCESS 2025-04-14 01:45:08.434523 | orchestrator | 2025-04-14 01:45:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:08.434670 | orchestrator | 2025-04-14 01:45:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:11.474935 | orchestrator | 2025-04-14 01:45:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:11.475053 | orchestrator | 2025-04-14 01:45:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:14.534676 | orchestrator | 2025-04-14 01:45:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:14.534827 | orchestrator | 2025-04-14 01:45:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:17.579828 | orchestrator | 2025-04-14 01:45:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:17.580011 | orchestrator | 2025-04-14 01:45:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:20.628184 | orchestrator | 2025-04-14 01:45:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:20.628325 | orchestrator | 2025-04-14 01:45:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:23.676680 | orchestrator | 2025-04-14 01:45:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:23.676814 | orchestrator | 2025-04-14 01:45:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:45:26.717535 | orchestrator | 2025-04-14 01:45:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:45:26.717701 | orchestrator | 2025-04-14 01:45:26 | INFO  | Task 
2025-04-14 01:50:01.183621 | orchestrator | 2025-04-14 01:50:01 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:54:54.005755 | orchestrator | 2025-04-14 01:54:54 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:54:57.059733 | orchestrator | 2025-04-14 01:54:54 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:54:57.059876 | orchestrator | 2025-04-14 01:54:57 | INFO  | Task dda40413-aecd-45ae-a018-160218d02110 is in state STARTED
2025-04-14 01:54:57.061036 | orchestrator | 2025-04-14 01:54:57 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:54:57.061358 | orchestrator | 2025-04-14 01:54:57 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:55:06.243327 | orchestrator | 2025-04-14 01:55:06 | INFO  | Task dda40413-aecd-45ae-a018-160218d02110 is in state SUCCESS
2025-04-14 01:55:06.244797 | orchestrator | 2025-04-14 01:55:06 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:55:09.303519 | orchestrator | 2025-04-14 01:55:06 | INFO  | Wait 1 second(s) until the next check
2025-04-14 01:55:33.726323 | orchestrator | 2025-04-14 01:55:33 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:57:02.157066 | orchestrator | 2025-04-14 01:57:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:59:07.181736 | orchestrator | 2025-04-14 01:59:07 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED
2025-04-14 01:59:10.225417 | orchestrator | 2025-04-14 01:59:07 | INFO  | Wait 1
second(s) until the next check 2025-04-14 01:59:10.225619 | orchestrator | 2025-04-14 01:59:10 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:13.281604 | orchestrator | 2025-04-14 01:59:10 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:13.281746 | orchestrator | 2025-04-14 01:59:13 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:16.341041 | orchestrator | 2025-04-14 01:59:13 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:16.341193 | orchestrator | 2025-04-14 01:59:16 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:19.395585 | orchestrator | 2025-04-14 01:59:16 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:19.395731 | orchestrator | 2025-04-14 01:59:19 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:22.440966 | orchestrator | 2025-04-14 01:59:19 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:22.441097 | orchestrator | 2025-04-14 01:59:22 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:25.482689 | orchestrator | 2025-04-14 01:59:22 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:25.482841 | orchestrator | 2025-04-14 01:59:25 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:28.537367 | orchestrator | 2025-04-14 01:59:25 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:28.537590 | orchestrator | 2025-04-14 01:59:28 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:28.537784 | orchestrator | 2025-04-14 01:59:28 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:31.594268 | orchestrator | 2025-04-14 01:59:31 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:34.647029 | orchestrator | 2025-04-14 01:59:31 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:34.647157 | orchestrator | 2025-04-14 01:59:34 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:37.699827 | orchestrator | 2025-04-14 01:59:34 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:37.699971 | orchestrator | 2025-04-14 01:59:37 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:40.752055 | orchestrator | 2025-04-14 01:59:37 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:40.752202 | orchestrator | 2025-04-14 01:59:40 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:43.797246 | orchestrator | 2025-04-14 01:59:40 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:43.797393 | orchestrator | 2025-04-14 01:59:43 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:46.846501 | orchestrator | 2025-04-14 01:59:43 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:46.846658 | orchestrator | 2025-04-14 01:59:46 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:49.890339 | orchestrator | 2025-04-14 01:59:46 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:49.890485 | orchestrator | 2025-04-14 01:59:49 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:52.945262 | orchestrator | 2025-04-14 01:59:49 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:52.945406 | orchestrator | 
2025-04-14 01:59:52 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:55.992402 | orchestrator | 2025-04-14 01:59:52 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:55.992540 | orchestrator | 2025-04-14 01:59:55 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 01:59:59.039872 | orchestrator | 2025-04-14 01:59:55 | INFO  | Wait 1 second(s) until the next check 2025-04-14 01:59:59.040038 | orchestrator | 2025-04-14 01:59:59 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:02.088158 | orchestrator | 2025-04-14 01:59:59 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:02.088338 | orchestrator | 2025-04-14 02:00:02 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:05.140133 | orchestrator | 2025-04-14 02:00:02 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:05.140280 | orchestrator | 2025-04-14 02:00:05 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:08.195757 | orchestrator | 2025-04-14 02:00:05 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:08.195893 | orchestrator | 2025-04-14 02:00:08 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:11.246642 | orchestrator | 2025-04-14 02:00:08 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:11.246773 | orchestrator | 2025-04-14 02:00:11 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:14.291797 | orchestrator | 2025-04-14 02:00:11 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:14.291918 | orchestrator | 2025-04-14 02:00:14 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:14.294267 | orchestrator | 2025-04-14 02:00:14 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:17.342382 | orchestrator | 2025-04-14 02:00:17 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:20.385248 | orchestrator | 2025-04-14 02:00:17 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:20.385375 | orchestrator | 2025-04-14 02:00:20 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:23.440076 | orchestrator | 2025-04-14 02:00:20 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:23.440216 | orchestrator | 2025-04-14 02:00:23 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:26.485914 | orchestrator | 2025-04-14 02:00:23 | INFO  | Wait 1 second(s) until the next check 2025-04-14 02:00:26.486125 | orchestrator | 2025-04-14 02:00:26 | INFO  | Task afc851a2-7042-41e3-be43-561439f9152f is in state STARTED 2025-04-14 02:00:29.397406 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-04-14 02:00:29.402358 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-04-14 02:00:30.131333 | 2025-04-14 02:00:30.131512 | PLAY [Post output play] 2025-04-14 02:00:30.161202 | 2025-04-14 02:00:30.161363 | LOOP [stage-output : Register sources] 2025-04-14 02:00:30.247161 | 2025-04-14 02:00:30.247465 | TASK [stage-output : Check sudo] 2025-04-14 02:00:30.947098 | orchestrator | sudo: a password is required 2025-04-14 02:00:31.293082 | orchestrator | ok: Runtime: 0:00:00.016446 2025-04-14 02:00:31.301812 | 2025-04-14 02:00:31.301927 | LOOP [stage-output : Set source 
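The block above is a client-side polling loop: the deploy playbook starts a long-running task (afc851a2-…) and re-checks its state every few seconds until it leaves STARTED. Here the task never finished, so Zuul's job-level timeout aborted the run with RESULT_TIMED_OUT and continued with the post-run playbooks below. A minimal sketch of that polling pattern with an explicit deadline follows; the function names and the Celery-style terminal states are illustrative assumptions, not the actual OSISM client API.

```python
import time


def wait_for_task(task_id, get_task_state, interval=1.0, timeout=600.0):
    """Poll a task until it reaches a terminal state or the deadline expires.

    get_task_state is a placeholder callable returning the task's state
    string (Celery-style: STARTED, SUCCESS, FAILURE, ...); it stands in
    for whatever client call the real tooling makes.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in ("SUCCESS", "FAILURE", "REVOKED"):
            return state
        if time.monotonic() >= deadline:
            # Fail on our own terms instead of relying on the CI job timeout.
            raise TimeoutError(f"Task {task_id} still {state} after {timeout}s")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

With a deadline of this kind the caller reports a failure itself; without one, as in this build, the wait only ends when Zuul cuts the job off.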
2025-04-14 02:00:29.402358 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-14 02:00:30.131333 |
2025-04-14 02:00:30.131512 | PLAY [Post output play]
2025-04-14 02:00:30.161202 |
2025-04-14 02:00:30.161363 | LOOP [stage-output : Register sources]
2025-04-14 02:00:30.247161 |
2025-04-14 02:00:30.247465 | TASK [stage-output : Check sudo]
2025-04-14 02:00:30.947098 | orchestrator | sudo: a password is required
2025-04-14 02:00:31.293082 | orchestrator | ok: Runtime: 0:00:00.016446
2025-04-14 02:00:31.301812 |
2025-04-14 02:00:31.301927 | LOOP [stage-output : Set source and destination for files and folders]
2025-04-14 02:00:31.340861 |
2025-04-14 02:00:31.341316 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-04-14 02:00:31.438213 | orchestrator | ok
2025-04-14 02:00:31.448946 |
2025-04-14 02:00:31.449067 | LOOP [stage-output : Ensure target folders exist]
2025-04-14 02:00:31.913052 | orchestrator | ok: "docs"
2025-04-14 02:00:31.913390 |
2025-04-14 02:00:32.155245 | orchestrator | ok: "artifacts"
2025-04-14 02:00:32.405189 | orchestrator | ok: "logs"
2025-04-14 02:00:32.425009 |
2025-04-14 02:00:32.425165 | LOOP [stage-output : Copy files and folders to staging folder]
2025-04-14 02:00:32.465221 |
2025-04-14 02:00:32.465461 | TASK [stage-output : Make all log files readable]
2025-04-14 02:00:32.775224 | orchestrator | ok
2025-04-14 02:00:32.786037 |
2025-04-14 02:00:32.786162 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-04-14 02:00:32.841873 | orchestrator | skipping: Conditional result was False
2025-04-14 02:00:32.856816 |
2025-04-14 02:00:32.856970 | TASK [stage-output : Discover log files for compression]
2025-04-14 02:00:32.884644 | orchestrator | skipping: Conditional result was False
2025-04-14 02:00:32.901954 |
2025-04-14 02:00:32.902107 | LOOP [stage-output : Archive everything from logs]
2025-04-14 02:00:32.973516 |
2025-04-14 02:00:32.973684 | PLAY [Post cleanup play]
2025-04-14 02:00:32.997423 |
2025-04-14 02:00:32.997542 | TASK [Set cloud fact (Zuul deployment)]
2025-04-14 02:00:33.074327 | orchestrator | ok
2025-04-14 02:00:33.090201 |
2025-04-14 02:00:33.090357 | TASK [Set cloud fact (local deployment)]
2025-04-14 02:00:33.127342 | orchestrator | skipping: Conditional result was False
2025-04-14 02:00:33.142086 |
2025-04-14 02:00:33.142242 | TASK [Clean the cloud environment]
2025-04-14 02:00:33.774721 | orchestrator | 2025-04-14 02:00:33 - clean up servers
2025-04-14 02:00:37.640024 | orchestrator | 2025-04-14 02:00:37 - testbed-manager
2025-04-14 02:00:37.751073 | orchestrator | 2025-04-14 02:00:37 - testbed-node-2
2025-04-14 02:00:37.877093 | orchestrator | 2025-04-14 02:00:37 - testbed-node-1
2025-04-14 02:00:37.991945 | orchestrator | 2025-04-14 02:00:37 - testbed-node-4
2025-04-14 02:00:38.107334 | orchestrator | 2025-04-14 02:00:38 - testbed-node-0
2025-04-14 02:00:38.250281 | orchestrator | 2025-04-14 02:00:38 - testbed-node-3
2025-04-14 02:00:38.341233 | orchestrator | 2025-04-14 02:00:38 - testbed-node-5
2025-04-14 02:00:38.452678 | orchestrator | 2025-04-14 02:00:38 - clean up keypairs
2025-04-14 02:00:38.474064 | orchestrator | 2025-04-14 02:00:38 - testbed
2025-04-14 02:00:38.497686 | orchestrator | 2025-04-14 02:00:38 - wait for servers to be gone
2025-04-14 02:00:49.785507 | orchestrator | 2025-04-14 02:00:49 - clean up ports
2025-04-14 02:00:50.014354 | orchestrator | 2025-04-14 02:00:50 - 05ae044c-d3d5-4faa-a6cc-30284abac626
2025-04-14 02:00:50.372892 | orchestrator | 2025-04-14 02:00:50 - 1887503b-afdb-4bbd-8f23-7d9515a3500b
2025-04-14 02:00:50.563195 | orchestrator | 2025-04-14 02:00:50 - 36d36d40-20d7-47cc-b7c9-d464c834b798
2025-04-14 02:00:50.765991 | orchestrator | 2025-04-14 02:00:50 - 7f4afe13-6a95-42c2-aca0-a9e15c6c7aab
2025-04-14 02:00:50.975205 | orchestrator | 2025-04-14 02:00:50 - 9f6f08de-95e9-466a-aa72-e8d571cb7a0f
2025-04-14 02:00:51.176482 | orchestrator | 2025-04-14 02:00:51 - aba87697-cd50-4f43-a7f3-dddf36044400
2025-04-14 02:00:51.373943 | orchestrator | 2025-04-14 02:00:51 - ccac8439-b890-400f-ac52-c95bcf2715c5
2025-04-14 02:00:51.576661 | orchestrator | 2025-04-14 02:00:51 - clean up volumes
2025-04-14 02:00:51.741211 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-2-node-base
2025-04-14 02:00:51.780278 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-5-node-base
2025-04-14 02:00:51.817255 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-0-node-base
2025-04-14 02:00:51.856872 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-3-node-base
2025-04-14 02:00:51.899503 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-manager-base
2025-04-14 02:00:51.936788 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-4-node-base
2025-04-14 02:00:51.972215 | orchestrator | 2025-04-14 02:00:51 - testbed-volume-14-node-2
2025-04-14 02:00:52.011652 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-9-node-3
2025-04-14 02:00:52.048628 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-16-node-4
2025-04-14 02:00:52.090477 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-15-node-3
2025-04-14 02:00:52.130382 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-10-node-4
2025-04-14 02:00:52.171766 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-1-node-base
2025-04-14 02:00:52.212564 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-17-node-5
2025-04-14 02:00:52.257551 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-1-node-1
2025-04-14 02:00:52.296690 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-6-node-0
2025-04-14 02:00:52.342173 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-13-node-1
2025-04-14 02:00:52.389281 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-8-node-2
2025-04-14 02:00:52.433605 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-7-node-1
2025-04-14 02:00:52.473940 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-2-node-2
2025-04-14 02:00:52.515957 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-11-node-5
2025-04-14 02:00:52.561158 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-4-node-4
2025-04-14 02:00:52.606846 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-12-node-0
2025-04-14 02:00:52.647399 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-0-node-0
2025-04-14 02:00:52.693660 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-3-node-3
2025-04-14 02:00:52.739041 | orchestrator | 2025-04-14 02:00:52 - testbed-volume-5-node-5
2025-04-14 02:00:52.777620 | orchestrator | 2025-04-14 02:00:52 - disconnect routers
2025-04-14 02:00:52.884966 | orchestrator | 2025-04-14 02:00:52 - testbed
2025-04-14 02:00:53.705140 | orchestrator | 2025-04-14 02:00:53 - clean up subnets
2025-04-14 02:00:53.752685 | orchestrator | 2025-04-14 02:00:53 - subnet-testbed-management
2025-04-14 02:00:53.884264 | orchestrator | 2025-04-14 02:00:53 - clean up networks
2025-04-14 02:00:54.084934 | orchestrator | 2025-04-14 02:00:54 - net-testbed-management
2025-04-14 02:00:54.331550 | orchestrator | 2025-04-14 02:00:54 - clean up security groups
2025-04-14 02:00:54.364747 | orchestrator | 2025-04-14 02:00:54 - testbed-node
2025-04-14 02:00:54.460040 | orchestrator | 2025-04-14 02:00:54 - testbed-management
2025-04-14 02:00:54.546183 | orchestrator | 2025-04-14 02:00:54 - clean up floating ips
2025-04-14 02:00:54.581668 | orchestrator | 2025-04-14 02:00:54 - 81.163.193.183
2025-04-14 02:00:54.986317 | orchestrator | 2025-04-14 02:00:54 - clean up routers
2025-04-14 02:00:55.078306 | orchestrator | 2025-04-14 02:00:55 - testbed
2025-04-14 02:00:55.766664 | orchestrator | changed
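The cleanup task above tears the testbed down in dependency order: servers first, then the keypair, then (once the servers are really gone) ports, volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the router itself. A rough openstacksdk sketch of that ordering follows; the cloud name, the name-prefix filters and the missing error handling are simplifying assumptions, not the project's actual cleanup code.

```python
import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is an assumption
prefix = "testbed"

# 1. servers, then the keypair, then wait until the servers are really gone
for server in conn.compute.servers():
    if server.name.startswith(prefix):
        conn.compute.delete_server(server)
for keypair in conn.compute.keypairs():
    if keypair.name.startswith(prefix):
        conn.compute.delete_keypair(keypair)
for server in conn.compute.servers():
    if server.name.startswith(prefix):
        conn.compute.wait_for_delete(server)

# 2. leftover ports and volumes (the log deletes ports by ID; in this job the
#    project only contains testbed ports, so no extra filtering is shown here)
for port in conn.network.ports():
    conn.network.delete_port(port)
for volume in conn.block_storage.volumes():
    if volume.name.startswith(f"{prefix}-volume"):
        conn.block_storage.delete_volume(volume)

# 3. detach the router from its subnets, then remove subnets, networks,
#    security groups
for router in conn.network.routers():
    if router.name == prefix:
        for subnet in conn.network.subnets():
            if subnet.name.startswith(f"subnet-{prefix}"):
                conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
                conn.network.delete_subnet(subnet)
for network in conn.network.networks():
    if network.name.startswith(f"net-{prefix}"):
        conn.network.delete_network(network)
for group in conn.network.security_groups():
    if group.name.startswith(prefix):
        conn.network.delete_security_group(group)

# 4. floating IPs, then the router itself
for ip in conn.network.ips():
    conn.network.delete_ip(ip)
for router in conn.network.routers():
    if router.name == prefix:
        conn.network.delete_router(router)
```

The ordering matters: ports and volumes cannot be removed while servers still hold them, and the router cannot be deleted while it still has interfaces or associated floating IPs.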
2025-04-14 02:00:55.803929 |
2025-04-14 02:00:55.804037 | PLAY RECAP
2025-04-14 02:00:55.804096 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-04-14 02:00:55.804133 |
2025-04-14 02:00:55.914382 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-04-14 02:00:55.917666 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-04-14 02:00:56.637389 |
2025-04-14 02:00:56.637546 | PLAY [Base post-fetch]
2025-04-14 02:00:56.667095 |
2025-04-14 02:00:56.667236 | TASK [fetch-output : Set log path for multiple nodes]
2025-04-14 02:00:56.734024 | orchestrator | skipping: Conditional result was False
2025-04-14 02:00:56.749442 |
2025-04-14 02:00:56.749619 | TASK [fetch-output : Set log path for single node]
2025-04-14 02:00:56.799705 | orchestrator | ok
2025-04-14 02:00:56.808988 |
2025-04-14 02:00:56.809112 | LOOP [fetch-output : Ensure local output dirs]
2025-04-14 02:00:57.313479 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/logs"
2025-04-14 02:00:57.595694 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/artifacts"
2025-04-14 02:00:57.871252 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work/docs"
2025-04-14 02:00:57.896948 |
2025-04-14 02:00:57.897099 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-04-14 02:00:58.705203 | orchestrator | changed: .d..t...... ./
2025-04-14 02:00:58.705593 | orchestrator | changed: All items complete
2025-04-14 02:00:58.705657 |
2025-04-14 02:00:59.306218 | orchestrator | changed: .d..t...... ./
2025-04-14 02:00:59.890494 | orchestrator | changed: .d..t...... ./
2025-04-14 02:00:59.918601 |
2025-04-14 02:00:59.918806 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-04-14 02:00:59.968097 | orchestrator | skipping: Conditional result was False
2025-04-14 02:00:59.974986 | orchestrator | skipping: Conditional result was False
2025-04-14 02:01:00.015929 |
2025-04-14 02:01:00.016044 | PLAY RECAP
2025-04-14 02:01:00.016098 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-04-14 02:01:00.016125 |
2025-04-14 02:01:00.138899 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
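In the post-fetch play above, fetch-output pulls the staged logs, artifacts and docs directories from the node back into the executor's build workspace; the ".d..t...... ./" lines are rsync's itemized-changes output for the copied directories. A small sketch of an equivalent pull is shown below; the user, host and remote output directory are placeholders, and the real role is driven by Zuul variables rather than hard-coded paths.

```python
import subprocess

# Placeholders: the real role derives the node, the remote output directory
# and the local build directory from Zuul variables instead of hard-coding them.
node = "zuul-worker@orchestrator"
remote_base = "~/zuul-output"
local_base = "/var/lib/zuul/builds/8b19518b04be443abf0d643941e8b221/work"

for subdir in ("logs", "artifacts", "docs"):
    # -a preserves attributes, -i prints itemized changes (the ".d..t...... ./"
    # lines above); the trailing slash copies directory contents, not the
    # directory itself.
    subprocess.run(
        ["rsync", "-ai", f"{node}:{remote_base}/{subdir}/", f"{local_base}/{subdir}/"],
        check=True,
    )
```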
2025-04-14 02:01:00.143947 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-14 02:01:00.866034 |
2025-04-14 02:01:00.866203 | PLAY [Base post]
2025-04-14 02:01:00.894953 |
2025-04-14 02:01:00.895100 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-04-14 02:01:01.974134 | orchestrator | changed
2025-04-14 02:01:02.014767 |
2025-04-14 02:01:02.014922 | PLAY RECAP
2025-04-14 02:01:02.014989 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-04-14 02:01:02.015053 |
2025-04-14 02:01:02.136412 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-04-14 02:01:02.139627 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-04-14 02:01:02.878424 |
2025-04-14 02:01:02.878601 | PLAY [Base post-logs]
2025-04-14 02:01:02.895296 |
2025-04-14 02:01:02.895454 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-04-14 02:01:03.364795 | localhost | changed
2025-04-14 02:01:03.370114 |
2025-04-14 02:01:03.370317 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-04-14 02:01:03.422600 | localhost | ok
2025-04-14 02:01:03.433854 |
2025-04-14 02:01:03.434028 | TASK [Set zuul-log-path fact]
2025-04-14 02:01:03.464635 | localhost | ok
2025-04-14 02:01:03.476717 |
2025-04-14 02:01:03.476884 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-04-14 02:01:03.518090 | localhost | ok
2025-04-14 02:01:03.530230 |
2025-04-14 02:01:03.530450 | TASK [upload-logs : Create log directories]
2025-04-14 02:01:04.055418 | localhost | changed
2025-04-14 02:01:04.059966 |
2025-04-14 02:01:04.060082 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-04-14 02:01:04.578420 | localhost -> localhost | ok: Runtime: 0:00:00.005630
2025-04-14 02:01:04.588631 |
2025-04-14 02:01:04.588809 | TASK [upload-logs : Upload logs to log server]
2025-04-14 02:01:05.162707 | localhost | Output suppressed because no_log was given
2025-04-14 02:01:05.166971 |
2025-04-14 02:01:05.167109 | LOOP [upload-logs : Compress console log and json output]
2025-04-14 02:01:05.239024 | localhost | skipping: Conditional result was False
2025-04-14 02:01:05.256089 | localhost | skipping: Conditional result was False
2025-04-14 02:01:05.272602 |
2025-04-14 02:01:05.272805 | LOOP [upload-logs : Upload compressed console log and json output]
2025-04-14 02:01:05.344841 | localhost | skipping: Conditional result was False
2025-04-14 02:01:05.345488 |
2025-04-14 02:01:05.358159 | localhost | skipping: Conditional result was False
2025-04-14 02:01:05.366984 |
2025-04-14 02:01:05.367132 | LOOP [upload-logs : Upload console log and json output]